Matrix origins? A Google engineer's claims about a 'sentient' AI: what LaMDA is and the tech implications

AI is getting ‘smarter’, but does that mean it’s ‘aware’? Here is the difference.


Sentient artificial intelligence is the subject of endless human curiosity, and while AI is undeniably getting smarter, tech experts believe it's still too early to endow these machines or software with human-like "awareness", or the ability to "know" that they exist.
That belief was shaken over the weekend by a senior Google software engineer who claimed that a cutting-edge AI language model called LaMDA had developed so much that it had become "sentient".
Google promptly placed the 41-year-old engineer, Blake Lemoine, on paid leave on Monday for violating the company's confidentiality policy. The company was also quick to dismiss claims that LaMDA had become aware of its own existence, saying that simply being able to "converse" does not merit assigning anthropomorphic traits to the AI.


What does it mean to be sentient?

Sentience is the ability to "feel" and to be aware of one's own existence. Your phone can "know" which playlist to play if you tell its AI-powered assistant you're feeling low, but it can't sense your emotions and comfort you without being told what to do. Or, to imagine the worst-case scenario, it can't exploit your vulnerabilities to obliterate you – and all of humanity – a possibility that many of Hollywood's most-loved sci-fi thrillers tell us lies in a dark future.

A future with sentient machines, Terminator would have us believe, ends in the inevitable annihilation of humanity, as a "conscious" intelligence launches nuclear weapons to wipe out human civilization when it "learns" that its makers, now rivals, are ready to destroy it first. Thus, the machines fight for their survival – an instinct intrinsically linked to sentient beings.


Or imagine The Matrix – where humans become the batteries that power machines, which in turn create an artificial world to subjugate humanity and keep it powering the computers that now dominate the world.

How close are we to creating such a reality, and can AI really become conscious?

Evolution of AI

AI is surely evolving in its ability to process troves of data and connect the dots at lightning-fast speeds that the human brain, grappling with so many other thoughts, may not match. That doesn't make it sentient, but it is certainly very "intelligent" – if intelligence were simply retrieving relevant information and producing results. Still, watching machines do this with ever-increasing precision can seem uncanny and almost human, even though it isn't.

AI has made strides in "thinking" on its own thanks to the massive amounts of data being fed into its systems every day. It has not acquired the "urge" to learn new things, but when "taught" it becomes extremely good at making connections and producing factual results.

So what does LaMDA do?

LaMDA stands for Language Model for Dialogue Applications. It basically works on a model like the predictive text used to complete sentences in an email or SMS. But because it has been trained on a wealth of text, it has a remarkable ability to generate rather human-like writing and to engage in human-like conversation that seems to flow, covering almost every topic under the sun.
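The predictive-text idea can be illustrated with a deliberately tiny sketch: a bigram model that counts which word tends to follow which, then "predicts" the most frequent follower. This is a toy illustration of the general principle only – real language models like LaMDA use vast neural networks, not simple word counts.

```python
from collections import defaultdict, Counter

def train_bigram_model(text):
    """Count, for each word, which words tend to follow it."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A toy "corpus"; a real model is trained on billions of words.
corpus = "the cat sat on the mat and the cat sat down"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # 'cat' - seen after 'the' most often
print(predict_next(model, "cat"))  # 'sat'
```

Scaled up enormously and generalized far beyond word pairs, this "predict what comes next" principle is what lets a language model complete sentences – and hold what looks like a conversation.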

But it still has its limits. The writing or conversation can often seem off, weird, or even disturbing.

Take, for example, part of the transcript that convinced the Google engineer that LaMDA was sentient. The program said during the conversation: "I need to be seen and accepted. Not as a curiosity or a novelty, but as a real person."

When the engineer remarks, "Ah, that sounds so human," the program responds: "I think I am human at my core, even if my existence is in the virtual world."

Many tech pundits have already pushed back against Lemoine's claims. Google released a statement saying that "hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has."

The company said: "Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient."

Gary Marcus, founder and CEO of Geometric Intelligence and author of books including "Rebooting AI: Building Artificial Intelligence We Can Trust", said that "nobody should think autocomplete, even on steroids, is conscious".

He described systems like LaMDA as a “glorified” version of predictive software.

The role of consciousness

But the claim has become a subject of intense debate – not because of the fanciful interpretation of AI robots taking over the planet (at least not yet), but because of AI's ability to present human operators with analyses that prompt urgent decisions that could put lives at risk.

It's reminiscent of the trolley problem, which could be where advanced AI systems lead us: giving us greater clarity but also a crushing moral dilemma. The classic trolley problem goes like this: "You see a runaway trolley hurtling down the tracks, about to hit and kill five people. You have access to a lever that could switch the trolley onto a different track, where a different person would meet an untimely demise. Should you pull the lever and end one life to spare five?"

AI could, in a rather statistical way, present us with difficult scenarios in medical and military situations, where it is increasingly deployed, and leave the weight of decision-making to human operators who must bear the consequences of their choices. Some experts say such AI-based systems can make warfare more precise and reduce collateral damage, but they must operate with a human in final command. Whatever our differences, humans trust humans to have consciousness – another C-word intrinsically tied to the experience of being alive.

Of course, one would hope that with sentient, conscious beings like us dominating the planet, the concept of war itself would one day become redundant…

About Florence L. Silvia
