What is LaMDA and why it is (not) Google’s answer to ChatGPT


It’s one of the most recurring questions since artificial intelligences such as ChatGPT and Lensa were put into circulation and made accessible to everyone: “Is Google doing nothing in this area?”. Insiders don’t ask, because they obviously know that Google is doing things in this sector, but many ordinary people do, especially on social media: “Why didn’t Google come up with something like this?”.

Even the employees in Mountain View seem to wonder, at least according to The Verge: they are reportedly concerned that these applications could take away the company’s role as the leader in the search engine arena. That is a legitimate but somewhat unfounded fear, because these AI models are disconnected from the Net and closed, in the sense that they photograph the situation at a precise historical moment: ChatGPT is able to write a sonnet imitating the style of Dante or compose a song that would win the Sanremo Festival, but it knows practically nothing about the war in Ukraine or the death of Mihajlović. So nothing can really be searched through it.

Having said that, it must also be said that answering the question is honestly not easy, partly because at the moment it seems practically impossible to raise the matter with someone inside the company, in Italy or in the United States. At least not before the first half of 2023, when, in the words of CEO Sundar Pichai (whom we interviewed last May), Google plans to reveal “a lot of things” in the field of AI applied to language. And yet, some reasoning on the subject can obviously be done.



The question of reputation

First of all, it should be remembered that Google is not a startup, and neither is OpenAI (the company behind ChatGPT), which is not small but is not comparable to the Mountain View giant either: if ChatGPT has exceeded one million users, Google responds every day to the requests of billions of people around the world. What does that mean? It means that the stakes are much higher, that the reputation at risk is much greater, that the consequences of any misstep would be multiplied by a thousand. Because it would be Google making the mistake, not a semi-unknown startup with high hopes, for which even a mistake has few consequences.

What happened to Meta, with its Galactica according to which “the Colosseum is a shopping centre”; what happened in 2016 at Microsoft with the Tay chatbot; and in general all the risks and problems, including ethical ones, connected to the development of an AI like ChatGPT, make it clear that Google is actually right to want to tread lightly.

Also because in Mountain View they have already had some difficulty with these topics, from the case of Blake Lemoine, the software engineer dismissed after saying the AI he was working on was sentient, to the (more serious) one of Timnit Gebru, the scientist fired after denouncing the biases of artificial intelligence.

What is LaMDA and how does it work

Lemoine worked on LaMDA, which can be seen as Google’s answer to ChatGPT. Though it’s not actually Google’s answer to ChatGPT. It seems so because, on the surface, it’s pretty much the same thing, i.e. a chatbot that has some form of intelligence and answers people’s questions; it is not, for at least a couple of reasons: because it was born first, and because it is continuously updated, unlike ChatGPT, which is closed and stopped at the end of 2021.

By “it was born earlier” we mean much earlier: version 1 was unveiled in May 2021, version 2 last June, and both are based on Transformer, a neural network architecture that Google developed and made open source as early as 2017. Its name stands for Language Model for Dialogue Applications, and that is what LaMDA has learned to do and does: it is able to dialogue with people and chat with them, even (according to Google) showing sensitivity and interest towards what its interlocutors tell it. It does it so well that Lemoine thought it was sentient.
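The Transformer mentioned above is built, in essence, around a single operation: attention, in which every word in a sentence weighs its relationship with every other word. As a rough illustration (not Google’s code, just a minimal NumPy sketch of scaled dot-product attention, the core operation of the architecture described in the 2017 paper; the shapes and data are made up):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to all keys; the values are then mixed
    according to the resulting attention weights."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query/key similarity
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                               # weighted sum of values

# Toy self-attention: 4 "tokens" with 8-dimensional representations
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(x, x, x).shape)   # (4, 8)
```

Stacking many such layers, interleaved with feed-forward blocks, is what lets models of this family keep track of context across a whole conversation.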



What Google does with artificial intelligence

Soon it will be able to do so in public too, when the company allows “small groups of people to download the test app to try it out”. But this is not the point. The point, as we have seen, is that Google is clearly active in the field of AI, and has been for a long time, even if perhaps not in such a striking and evident way as OpenAI and Prisma Labs (the developers of Lensa). Which will therefore hardly be able to catch it off guard in a sector in which it has been present for over twenty years.

In Mountain View they started using artificial intelligence as early as 2001, to check the spelling of what people search for online (which is what we see today in suggestions like “Did you mean”), and it is in Search in general that the main applications of this technology are found, with a further acceleration starting from 2011. Future uses of LaMDA can also be imagined in this field, for example to give more humanity to the voice assistant’s responses, but there are many others:

  • the neural matching function translates searched words into concepts, so that “we can provide answers that better address the meaning of the questions” (a minimal sketch of the idea follows this list);

  • the MUM algorithm (the acronym stands for Multitask Unified Model; we wrote about it here), which is described as 1,000 times more powerful than the previous BERT, “not only understands language, but generates it” and is “trained in 75 different languages”;

  • Multisearch (what’s this?) allows you to search starting from an image, even photographing something and writing a question directly on top of it, so as to get results about that specific image;

  • Google’s voice assistant, whether on phones, TVs or smart speakers, is obviously based on machine learning and voice recognition algorithms, even coming to understand the implicit meaning and nuances of the various questions;

  • in Maps, computer vision and neural networks are used to reconstruct buildings starting from satellite images, which has made it possible (for example) to increase the number of buildings cataloged in Africa fivefold since July 2020, from 60 million to almost 300 million;

  • still in Maps, Immersive View (what’s this?) uses machine learning for incredibly realistic 3D reconstructions of monuments, landscapes and significant places.
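Google has not published the internals of neural matching, but the general idea it describes, comparing queries and pages by the similarity of their vector representations (“concepts”) rather than by shared keywords, can be sketched in a few lines. The tiny embeddings below are invented for illustration; in a real system they would come from a neural encoder trained on enormous amounts of text (the “soap opera effect” pairing echoes the example Google itself has used to explain the feature):

```python
import numpy as np

# Hypothetical toy embeddings; real ones are learned by a neural network
docs = {
    "why does my tv look strange":  np.array([0.90, 0.10, 0.30]),
    "the soap opera effect":        np.array([0.85, 0.15, 0.35]),
    "how to bake sourdough bread":  np.array([0.10, 0.90, 0.20]),
}

def cosine(a, b):
    """Similarity of two vectors, independent of their length."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank documents by meaning, not by overlapping words:
# the query and the best match share almost no vocabulary.
query = docs["why does my tv look strange"]
for text, vec in sorted(docs.items(), key=lambda kv: -cosine(query, kv[1])):
    print(f"{cosine(query, vec):.3f}  {text}")
```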

So much for what we ordinary consumers can experience first-hand, even if we may not realize it. Then there’s the whole part we don’t see, because it touches us less closely:

  • Flood Hub (what’s this?), which uses machine learning and AI to predict floods, estimating, on the basis of rainfall intensity, where and how soon the level of a river or stream will rise and how much land it will flood;

  • the many applications in the medical sector (here some examples), with AIs developed by Google that help doctors identify breast cancer, lung cancer and retinopathy, and estimate the risk of heart attack very effectively, even without the need for specific tests;

  • the many applications related to the environment and science, more or less all based on TensorFlow (what’s this?) and on machine learning: from those that assess the state of health of plants, cows and land, to those that recognize the sound of a chainsaw (used to counter deforestation in the Amazon), up to those that predict how various molecules, useful in the pharmaceutical sector, will combine;

  • the work in the field of creativity, such as using AI to automatically caption over 1 billion videos in 10 languages on YouTube, making them more accessible to hundreds of millions of deaf people, or Project Magenta (what’s this?), whose goal is to “open source machine learning-based tools for artists and musicians”.

Not to mention everything that is done with the excellent Translate, which since 2020 has also translated hieroglyphs and thanks to AI works in over 130 languages; the Real Tone feature of the Pixel 7 and 7 Pro smartphones, which more accurately reproduces different skin tones in photos; or Imagen, which works similarly to Dall-E 2 (what’s this?) and creates images that did not exist, starting from words.

In short: it’s not that Google isn’t there in this field. It’s there, but you can’t see it. Or rather: it’s there so much that we (almost) no longer notice it.

* “Two robots drinking a glass of wine sitting at a table in a café facing the Eiffel Tower”: opening image created with Dall-E 2

@capoema
