In this article, I will provide a step-by-step guide on how to set up and use the Google PaLM 2 AI model without programming. I hope that after reading it you can use this tool to provide better services or even bring your new ideas to life.
Over the last decade, many companies have released AI models; Amazon, for example, introduced Amazon Lex. One of the most important was ChatGPT, a text-based language model trained by OpenAI. With this in mind, giant tech companies like Google are trying to keep up with the hype and introduce their own technologies and AI models as quickly as possible. Despite a challenging start, Google showed it could develop a capable AI model quickly by introducing PaLM 2.
The goal of this guide is to walk you through setting up and using the Google PaLM 2 AI model, step by step, without writing any code.
Before we dive into the tutorial, here is what you will need:
- A Google Cloud account
- Courage to discover new technologies
To set up Google Vertex AI, follow the steps below:
Head to the Google Vertex AI page. Click on Go to console. If prompted, enter your Google credentials.
Welcome to the Vertex AI dashboard.
From the left menu, click on Language.
Under Start a conversation, click on TEXT CHAT.
Enter a sentence of your own in the box. You will most likely see an error at this point, since the Vertex AI API is not enabled yet.
A prompt will be shown. Click on ENABLE.
As you can see, the Vertex AI API is enabled.
You are all set! Check out the bot and have a nice conversation with it!
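Behind the scenes, the console chat is calling the Vertex AI predict endpoint for you. If you are curious what that request looks like, here is a minimal sketch that only builds the URL and JSON body; the project ID is a hypothetical placeholder, and the endpoint path and payload shape follow Google's public Vertex AI documentation for the chat-bison model. No request is actually sent.

```python
import json

# Hypothetical placeholders: substitute your own GCP project and region.
PROJECT_ID = "your-gcp-project"
LOCATION = "us-central1"

# Endpoint path for the chat-bison model, per the public Vertex AI docs.
endpoint = (
    f"https://{LOCATION}-aiplatform.googleapis.com/v1/"
    f"projects/{PROJECT_ID}/locations/{LOCATION}/"
    f"publishers/google/models/chat-bison:predict"
)

# Request body: a conversation context plus the user's message,
# with basic generation parameters.
payload = {
    "instances": [
        {
            "context": "You are a helpful assistant.",
            "messages": [{"author": "user", "content": "Hello, PaLM 2!"}],
        }
    ],
    "parameters": {"temperature": 0.2, "maxOutputTokens": 256},
}

print(endpoint)
print(json.dumps(payload, indent=2))
```

Sending this request would also require an OAuth access token in the `Authorization` header, which the console handles for you; that is exactly what "without programming" saves you from.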
What is Hallucination in a chatbot?
According to Wikipedia, in artificial intelligence (AI), a hallucination or artificial hallucination (also occasionally called confabulation or delusion) is a confident response by an AI that does not seem to be justified by its training data, either because that data is insufficient, biased, or too specialized.
For more information, please refer to the New York Times article on When A.I. Chatbots Hallucinate.
I asked PaLM 2 many fictional questions and, honestly, I was impressed by its responses. That said, in some scenarios it refused to answer because the question I asked was fictional.
I also tried a programming question. Though it was simple, the model was able to find the bug and provide the correct solution.
I also asked a few more questions and a riddle worth mentioning; the chatbot was able to solve the riddle as well.
I am quite amazed at how this model behaves, and we should keep in mind that we are talking to the Bison model, which is not even the largest PaLM 2 variant.
Moreover, what makes PaLM 2 particularly advantageous is that it has been trained more recently than several OpenAI models, with data going up to February 2023 instead of the usual cutoff of September 2021. Additionally, the PaLM 2-derived Bison model allows a maximum of 4096 input tokens and 1024 output tokens.
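If you want a quick sanity check that a prompt fits within those Bison limits, a rough estimate is enough. The sketch below uses the common heuristic of roughly four characters per token for English text; this is an assumption for illustration, not the model's real tokenizer.

```python
# Bison's limits as quoted above: 4096 input tokens, 1024 output tokens.
MAX_INPUT_TOKENS = 4096
CHARS_PER_TOKEN = 4  # assumption: a rough average for English text


def estimate_tokens(text: str) -> int:
    """Estimate a prompt's token count using a crude character ratio."""
    return max(1, len(text) // CHARS_PER_TOKEN)


def fits_input_budget(prompt: str) -> bool:
    """Return True if the prompt is likely within Bison's input limit."""
    return estimate_tokens(prompt) <= MAX_INPUT_TOKENS

print(fits_input_budget("Hello, PaLM 2!"))  # a short prompt fits easily
print(fits_input_budget("word " * 20000))   # a very long prompt does not
```

For real applications you would use the tokenizer or token-counting endpoint of whichever API you call, but a heuristic like this is handy for quick budgeting.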
While it may not match the context length of GPT-4, which is offered in 8192-token and 32K-token variants, PaLM 2 offers affordability and faster performance for most AI-based applications. In summary, Google regularly fine-tunes and updates PaLM 2 (with the latest update on May 10th, 2023), making it a favorable choice for developers compared to other options.
Finally, if you find this article useful, don't forget to support me, at least with a message :)