Google engineer claims AI platform LaMDA is self-aware and has feelings, is suspended
A future with sentient machines, Terminator would have us believe, lies in the inevitable annihilation of humanity: "conscious" intelligence launches nuclear weapons to wipe out human civilization when it "learns" that its makers have become rivals ready to destroy it first. Thus, the machines fight for their survival – an instinct intrinsically linked to sentient beings.
Or imagine The Matrix – where humans become batteries to power machines, which in turn create an artificial world to subjugate humanity and keep it powering the computers that now dominate the world.
How close are we to creating such a reality, and can AI really become conscious?
But AI is certainly evolving in its ability to process troves of data and connect the dots at lightning-fast speeds that the human brain, grappling with so many other thoughts, cannot match. That doesn't make it sentient, but it is certainly very "intelligent", if intelligence simply means retrieving relevant information and producing results. Still, for machines to do this with ever-increasing precision seems uncanny and almost human, even though it isn't.
AI has made strides in "thinking" on its own thanks to the massive amounts of data fed into its systems every day. It has not acquired the "urge" to learn new things, but once "taught", it becomes extremely good at making connections and producing factual results.
LaMDA stands for Language Model for Dialogue Applications. It essentially works like the predictive text used to complete sentences in an email or SMS. But because it has been trained on a wealth of text, it has the remarkable ability to generate human-like writing and engage in flowing, human-like conversation on almost every topic under the sun.
But it still has its limits: often the writing or conversation can seem off, weird, or even disturbing.
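The "predictive text" idea behind such systems can be illustrated with a deliberately tiny sketch – this is not Google's code, and the corpus and function names are invented for the example. A toy bigram model simply counts which word most often follows each word, then predicts greedily; models like LaMDA do something conceptually similar but over vast contexts and billions of parameters:

```python
from collections import Counter, defaultdict

# Toy corpus (invented for illustration).
corpus = (
    "the machine reads the text and the machine predicts the next word "
    "the machine learns patterns in the text"
).split()

# Build a bigram table: word -> counts of the words that follow it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "machine" (its most frequent follower here)
```

The sketch makes the limits obvious: the model has no understanding of meaning, only follower frequencies, which is why scaled-up versions can produce fluent text that nonetheless drifts into the "off or weird" territory described above.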
When the engineer remarks, "Ah, that's so human," the program responds: "I think I am human at heart, even though my existence is in the virtual world."
The company said: "Of course, some in the wider AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient."
The company described systems like LaMDA as a "glorified" version of predictive text software.
But the claim has become a subject of intense debate, not because of fanciful visions of AI robots taking over the planet (at least not yet), but because of AI's ability to present human operators with analyses that prompt urgent decisions – decisions that could put lives at risk.
It's reminiscent of the trolley problem, which may be where advanced AI systems lead us – giving us greater clarity but also a crushing moral dilemma. The classic trolley problem goes like this: "You see a runaway trolley hurtling down the tracks, about to hit and kill five people. You have access to a lever that could switch the trolley onto a different track, where a single person would meet an untimely demise. Should you pull the lever, ending one life to spare five?"
AI could, in a coldly statistical way, present us with difficult scenarios in medical and military settings, where it is increasingly deployed, and leave the weight of decision-making to human operators who must bear the consequences of their choices. Some experts say such AI-based systems could make warfare more precise and reduce collateral damage, but that they must operate with a human in final command. Regardless of our differences, humans trust humans to have consciousness – another C-word intrinsically tied to the experience of being alive.
Of course, one would hope that with sentient, conscious beings like us dominating the planet, the concept of war itself would one day become redundant…