
Bill Gates thinks pausing AI development isn’t a viable option

A recent open letter penned by Elon Musk, Apple co-founder Steve Wozniak, and several other notable AI researchers calls on AI labs around the world to pause giant AI experiments, citing profound risks to society and humanity.

Bill Gates, however, wasn’t on the list of signatories, and now he’s speaking out against those trying to pause AI development.

As reported by Reuters, Gates believes that pausing the development of AI won’t “solve the challenges” that lie ahead, and that a pause would be difficult to enforce on a global scale. Instead, he argues, the focus should be on how best to use these advances in AI. “I don’t think asking one particular group to pause solves the challenges,” said Gates. “Clearly there’s huge benefits to these things… what we need to do is identify the tricky areas.”

Gates, who now works as a full-time philanthropist alongside his ex-wife Melinda Gates, has been a vocal supporter of AI and its potential to solve some of the world’s biggest problems. In a blog post dated March 21, a day before the open letter was published, he wrote that AI could help reduce inequalities in health, education and agriculture. He also suggested that AI could be as revolutionary as the internet was in the 1990s.

Gates said he was optimistic about the future of AI and that he believed humans could control it. He said he did not think AI would make itself smarter without human input, as some have feared.

Microsoft, the company Gates co-founded, has also long backed OpenAI, and its investment is a clear sign of how central it believes AI will be to the future of technology.

Source: Reuters


Apple co-founder, Musk and more call for AI development pause

Elon Musk and a number of well-known AI researchers have penned an open letter calling on AI labs around the world to pause the development of large-scale AI systems.

“Pause Giant AI Experiments” is the title of the open letter, and it is co-signed by Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, Pinterest co-founder Evan Sharp, Stability AI CEO Emad Mostaque, and several other notable AI researchers. 

The letter says that “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.” It argues that working on such technology requires labs to take meticulous precautions, and that it should be planned for and managed with “commensurate care and resources.”

However, the letter argues, that level of planning and management is not happening; instead, AI labs are locked in a race to develop and deploy large-scale AI systems that “no one – not even their creators – can understand, predict, or reliably control.”

Therefore, the letter suggests AI labs should “immediately pause” the training of AI systems more powerful than GPT-4 for at least six months. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

The letter also quotes OpenAI’s recent statement regarding artificial general intelligence: “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.”

The letter also offers a solution: a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts. This does not mean halting AI development altogether; it means taking a temporary pause until such safety protocols are in place.

You can check out the full letter on the Future of Life Institute’s website.

Image credit: Shutterstock

Source: Future of Life