OpenAI’s ChatGPT, one of the most sophisticated language-processing models and arguably the best-known chatbot to date, may have been surpassed by a Google artificial intelligence (AI) model that is reportedly around three times larger and more powerful.
According to reports, the new model, known as PaLM (Pathways Language Model), was unveiled in April and runs on Google’s own Tensor Processing Unit (TPU) hardware, which competes with NVIDIA’s GPUs. According to Sterling Crispin, an artist and programmer focused on NFT art, PaLM is expected to outperform ChatGPT in terms of features once Google switches it on.
This came two days after Google issued a “code red” warning that ChatGPT’s growing popularity could threaten to displace Google Search.
What is PaLM?
PaLM is part of Google’s Pathways system, which allows a single model to be scaled across tens of thousands of TPU chips. The technology is eventually expected to produce a huge multimodal model that can analyse language, sound, and vision all at once.
According to Jeff Dean, a Google Senior Fellow and SVP, the Pathways approach combines the strengths of multiple AI systems while addressing some of their drawbacks.
One of PaLM’s main advantages is its capacity to perform many tasks rather than just one, as is the case with most current AI models. Instead of being specialised for a single capability, it could potentially be trained on thousands or even millions of diverse tasks. This not only makes PaLM more efficient and versatile, but also lets it take on new tasks more quickly and successfully by leveraging its existing skill set.
For example, a model trained to understand the relationship between aerial photographs and terrain elevation could be used to predict how flood waters will move across the landscape. Much like the mammalian brain, PaLM may be able to generalise between tasks, making it a more powerful and effective tool for a wide variety of applications.
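The shared-skills idea described above can be sketched in code. The following is a toy illustration only, an assumption about the general "one trunk, many heads" pattern used in multi-task learning, and not Google's actual Pathways architecture: a single shared network body feeds several small task-specific output heads, so a new task plugs into representations the model already has instead of requiring a whole new model.

```python
import numpy as np

rng = np.random.default_rng(0)

class SharedTrunkModel:
    """Toy multi-task model: one shared trunk, many task heads (illustrative only)."""

    def __init__(self, input_dim, hidden_dim, task_output_dims):
        self.hidden_dim = hidden_dim
        # Shared weights, reused by every task.
        self.W_shared = rng.normal(0, 0.1, size=(input_dim, hidden_dim))
        # One lightweight head per task.
        self.heads = {
            task: rng.normal(0, 0.1, size=(hidden_dim, out_dim))
            for task, out_dim in task_output_dims.items()
        }

    def forward(self, x, task):
        hidden = np.tanh(x @ self.W_shared)   # shared representation
        return hidden @ self.heads[task]      # task-specific output

    def add_task(self, task, out_dim):
        # A new task only needs a new head; the trunk's existing
        # "skills" (learned features) are reused as-is.
        self.heads[task] = rng.normal(0, 0.1, size=(self.hidden_dim, out_dim))

# Hypothetical task names, chosen purely for illustration.
model = SharedTrunkModel(input_dim=16, hidden_dim=32,
                         task_output_dims={"translate": 8, "summarise": 4})
x = rng.normal(size=(1, 16))
print(model.forward(x, "translate").shape)  # (1, 8)

model.add_task("flood_prediction", out_dim=2)
print(model.forward(x, "flood_prediction").shape)  # (1, 2)
```

The point of the sketch is the shape of the design, not the numbers: adding "flood_prediction" touches only a small new head, which is the sense in which a multi-task model can take on new responsibilities by leveraging what it already knows.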
PaLM’s capacity to process multiple modalities at once is another major benefit. Most current AI models can handle only one kind of data at a time, whether text, images, or audio, and cannot process all three together. Because such a model cannot incorporate all the relevant information, its understanding of reality can suffer from inaccuracies and biases.
PaLM, on the other hand, is expected to process several modalities simultaneously, including language, vision, and audio. It should therefore be able to grasp the concept of a leopard whether it is examining the written word, the sound of someone saying “leopard,” or a video of a leopard running.
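One common way to realise the "leopard in any modality" idea, and this is an assumption about the general technique rather than a description of PaLM's real design, is a shared embedding space: separate encoders map text, audio, and images into one vector space, trained so that the same concept lands in roughly the same place regardless of modality. A minimal numerical sketch:

```python
import numpy as np

rng = np.random.default_rng(42)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

EMBED_DIM = 8

# Stand-in for what trained encoders would produce: a jointly learned
# "leopard" concept vector, with each modality's encoder landing near it.
# These vectors are fabricated for illustration, not real model outputs.
leopard_concept = rng.normal(size=EMBED_DIM)
embeddings = {
    "text:leopard":  leopard_concept + rng.normal(0, 0.01, EMBED_DIM),
    "audio:leopard": leopard_concept + rng.normal(0, 0.01, EMBED_DIM),
    "image:leopard": leopard_concept + rng.normal(0, 0.01, EMBED_DIM),
    "text:bicycle":  rng.normal(size=EMBED_DIM),  # unrelated concept
}

# Cross-modal agreement: text and image views of "leopard" align...
same = cosine(embeddings["text:leopard"], embeddings["image:leopard"])
# ...while an unrelated concept does not.
diff = cosine(embeddings["text:leopard"], embeddings["text:bicycle"])
print(same > diff)
```

In a real multimodal model the encoders are learned so that this alignment emerges from training; the sketch only shows the geometry that makes cross-modal understanding possible.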
By drawing on a wider variety of information to inform its understanding of the world, PaLM may become a more insightful and trustworthy model.
Beyond performing varied tasks and interpreting multiple modalities, PaLM is expected to have a more flexible and adaptable structure than existing models. This may allow it to generalise between tasks more successfully and, as noted above, learn new ones more quickly. It might also strengthen PaLM’s defences against the kinds of biases and errors that can arise when models are trained on skewed or limited data sets.
When Will PaLM Be Available to the Public?
Overall, PaLM appears to be a significant advancement in AI, with the potential to transform how we use machine-learning models to solve a wide range of problems. Although it is unclear when Google will switch PaLM on, or how it will actually fare against other language-processing models in the real world, it is undoubtedly a fascinating development to watch.
Some users have expressed doubt about Crispin’s position, arguing that a larger language model does not always guarantee better performance. Size is only one of several variables that can affect how well a language model performs.
Not satisfied with the information at hand, we turned to ChatGPT, PaLM’s main competitor at the moment, for one final piece of input. When we asked whether PaLM is truly superior, ChatGPT responded as follows:
Due to my lack of access to knowledge on the capabilities and constraints of PaLM and ChatGPT, I am unable to evaluate their relative performance. The performance of AI models can vary widely depending on a variety of factors, including the particular tasks they are designed to perform, the quality and quantity of the data they are trained on, and the hardware and software they are run on. As a result, it can be challenging to draw firm conclusions from comparisons between AI models.
It is impossible to establish precisely whether PaLM is really three times more powerful than ChatGPT without knowing more specifics about both models. Such assertions warrant caution, since they can be difficult to verify and may not accurately reflect the models’ actual capabilities.
Could ChatGPT simply be biased, or envious, in saying that? Decide for yourself.