
‘The Truth’ Goes Up in a Cloud of Smoke Overnight: Why the Experts are Scared of AI.

If One of the Cornerstones of Public Trust like 'Truth' Goes up in a Cloud of Smoke Overnight, Then What Happens to our Society?

Experts Like Dr. Hinton Are Scared of AI

Formerly employed by Google, Dr. Geoffrey Hinton is an artificial intelligence pioneer. In 2012, Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the AI systems that today's biggest technology companies believe are key to their future.

Geoffrey Hinton has been a proponent of ethical AI since before the term was coined. He originally joined the University of Toronto partly to avoid US military funding: he did not want his work on artificial intelligence used to make weapons of any kind.


Geoffrey Hinton arrived at Google through the acquisition of his company. Google spent over $44 million to acquire the tech start-up founded by Dr. Hinton and two of his students. The systems his company built led to increasingly powerful AI technologies, including new LLM-based chatbots such as ChatGPT and Google Bard. Dr. Hinton helped lay the groundwork for our current AI boom, and one of his students, Ilya Sutskever, went on to become chief scientist at OpenAI.


In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks. Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Thanks to the work of Hinton and others, these networks became a powerful new way for machines to understand and generate language. Everyone listens when Geoffrey Hinton speaks about AI, and he is very worried about our future.

What Dangers Are So Terrible That the 'AI Godfather' Geoffrey Hinton Felt He Had to Resign?

Dr. Hinton said in a recent interview that his immediate concern is that the internet is being flooded with intentionally misleading photos, deepfake videos, and text created by AI systems known to hallucinate and make up 'facts'. Under this deluge of bad information, the average person will “not be able to know what is true anymore.” Dr. Hinton has listed many possible AI dangers. He is worried that AI technologies will in time have a catastrophic impact on global employment, taking office jobs by the millions as companies turn to AI to cut costs.

Dr. Hinton has drawn parallels to how seriously we regulate and control nuclear weapons. The big difference is that while it is possible to locate nuclear weapons by their radioactive signatures, AI is essentially untraceable, so devastating, disruptive AI weapons could easily be developed in secret. There is currently no way to prevent or control this threat posed by LLMs and other emerging AI-enhanced systems. In this hair-raising comparison, the weak link is our inability to anticipate exactly what the dangers are or where they may come from until the tsunami actually hits us. This unknown factor lurks in the shadows whenever we talk about the future of AI.

The Hypothetical Threat: AI's Potential Dangers According to Experts

Many other experts, including many of Dr. Hinton's students and colleagues, argue that the potential threat posed by artificial intelligence is merely hypothetical. However, Dr. Hinton believes that the competition between tech giants like Google, Microsoft, and others will escalate into an unstoppable global race unless some form of global regulation is implemented.

The Challenge of AI Regulation and the Need for Collaboration

Dr. Hinton admits that achieving such regulation may be an insurmountable challenge. Unlike nuclear weapons, it is impossible to know whether companies or countries are secretly working on AI technology. He suggests that the best hope for controlling AI's potential dangers is through collaboration among the world's leading scientists. "I don't think they should scale this up more until they have understood whether they can control it," he said.

In the past, when people questioned Dr. Hinton about his involvement in potentially dangerous technology, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: "When you see something that is technically sweet, you go ahead and do it." However, Dr. Hinton no longer holds this view, recognizing the importance of carefully considering the ethical implications of AI development.

Introduction: The Significance of Truth in an AI-Driven Society

If one of the cornerstones of public trust, like 'The Truth', goes up in a cloud of smoke overnight, then what happens to our society? As artificial intelligence (AI) continues to advance, the line between fact and fiction becomes increasingly blurred. Generative AI is changing the way we process information in our communities, in our offices, and in our schools. The emergence of powerful AI systems has raised pressing new concerns about the truth and its significance in our society.

Dr. Geoffrey Hinton: A Pioneer in Artificial Intelligence

Formerly employed by Google, Dr. Geoffrey Hinton is an artificial intelligence pioneer. In 2012, Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the AI systems that today's biggest technology companies believe are key to their future.

Dr. Hinton's Career Accomplishments: A Snapshot

    • Turing Award recipient in 2018, often called "the Nobel Prize of computing," for his work on neural networks
    • Founded a company acquired by Google for over $44 million
    • Influenced the creation of powerful AI technologies, such as ChatGPT and Google Bard
    • Mentored students who went on to hold prominent positions in the field, such as OpenAI's Chief Scientist Ilya Sutskever

The Impact of Dr. Hinton's Work on AI's Emergent Abilities

The systems built by Dr. Hinton and his students led to increasingly powerful technologies, including new chatbots such as ChatGPT and Google Bard. One of his students, Ilya Sutskever, went on to become Chief Scientist at OpenAI, further expanding the reach of AI's emergent abilities. Despite these advancements, Hinton long believed that AI's approach to language was inferior to the way humans handle it.

The Perils of AI's Emergent Abilities: The Erosion of Truth

Dr. Hinton expressed his concerns in a recent interview, stating that his immediate worry is the flooding of the internet with false photos, videos, and text. As AI's emergent abilities grow, the average person may struggle to discern what is true anymore. This erosion of truth poses a significant challenge to our society's ability to make informed decisions and maintain trust in information sources.

The Socio-Economic Impact of AI's Emergent Abilities

In addition to the challenges surrounding truth, Dr. Hinton also expressed concerns about the potential catastrophic impact of AI technologies on global employment. As AI systems become more capable, they may displace human workers in various industries, leading to job loss and social upheaval.

Navigating the Future of AI: Balancing Advancements and Ethical Considerations

As we continue to develop AI systems with emergent abilities, it is crucial to strike a balance between technological advancements and ethical considerations. This includes:

    • Implementing safeguards to maintain the truth in digital content
    • Creating interdisciplinary collaborations to understand and address the socio-economic implications of AI advancements
    • Establishing regulatory frameworks to ensure responsible AI development and deployment

Conclusion: Embracing AI's Emergent Abilities Responsibly

Dr. Geoffrey Hinton's legacy in the field of artificial intelligence serves as a reminder of the incredible potential that AI holds. However, as we continue to harness AI's emergent abilities, we must also remain vigilant about the ethical and societal implications of these advancements. By navigating this delicate balance, we can ensure that AI serves as a force for good, bolstering our society's progress and well-being while preserving the cornerstone of truth.

Transcript from our recent AI PIE news story:

... That's right Daisey, Dr. Geoffrey Hinton is an artificial intelligence pioneer; until recently he worked for Google. In 2012, Hinton and two of his graduate students at the University of Toronto created foundational AI technology that today's biggest technology companies are scrambling to stay on top of.

In a recent interview he said he has many concerns about AI. In his opinion, the most pressing is that the internet will be flooded with false text, photos, and videos, and the average person will “not be able to know what is true anymore.” He is also worried that AI technologies will in time have a catastrophic impact on global employment. The Doctor's warning may hit too close to home in the current economic climate. I think we all need to keep following this story as it develops. Thank you for watching, and if your monkey is on fire... Tell them what to do please Daisey.

In 2018, Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks. Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Thanks to the work of Hinton and others, these networks became a powerful new way for machines to understand and generate language. Everyone listens when Geoffrey Hinton speaks about AI, and the Doctor is very worried about our future.