- In this artificial intelligence roundup, we highlight the major AI stories of the last month.
- Top news: AI experts call for a pause in development. Generative AI could automate 300 million jobs. AI helps healthcare workers screen for cancer.
1. Tech leaders are calling for a pause in training AI systems
A group of leading figures in artificial intelligence and digital technology has published an open letter calling for a six-month moratorium on developing AI systems more powerful than OpenAI’s GPT-4.
The signatories to the letter, issued by the Future of Life Institute, warn: “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.”
The letter has been signed by more than 1,400 people, including Apple co-founder Steve Wozniak, Turing Award winner Professor Yoshua Bengio, and Stuart Russell, director of the Center for Intelligent Systems at the University of California, Berkeley.
The letter was also signed by Elon Musk, a co-founder of OpenAI, the developer of ChatGPT. Musk’s foundation also funds the Future of Life Institute, which issued the letter. Several researchers at Alphabet’s DeepMind have also added their names to the list of signatories.
The letter accuses AI labs of rushing to develop systems with greater-than-human intelligence without properly weighing the potential risks and consequences for humanity.
“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter says.
The letter’s signatories urge AI developers to work with governments and policymakers to create robust regulatory and governance systems for AI.
In an apparent response to the open letter, Sam Altman – the CEO of OpenAI, whose ChatGPT and GPT-4 have led AI development in recent months – posted this tweet:
The tweet is essentially a summary of a blog post Altman published on 24 February 2023. In the post, Altman says his company’s mission is to “ensure that artificial general intelligence (AGI) – AI systems that are generally smarter than humans – benefits all of humanity.”
Altman also acknowledged the potential risks of superintelligent AI systems, citing “misuse, drastic accidents, and societal disruption.” The OpenAI CEO went on to explain the company’s approach to mitigating these risks:
“As our systems get closer to AGI, we are becoming increasingly cautious with the creation and deployment of our models. Our decisions will require much more caution than society usually applies to new technologies. Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential.”
2. Up to 300 million jobs could be affected by AI: Goldman Sachs
The advent of generative AI has raised a fundamental question in the minds of millions of workers: will a machine do my job? A research paper published by investment bank Goldman Sachs offers some estimates of how far automation might reach. The paper says that 300 million full-time jobs could be exposed to automation if generative AI delivers on its promised capabilities.
The researchers looked at jobs in the US and Europe to arrive at that figure. The bottom line: two-thirds of current jobs in these regions are exposed to some degree of automation, and generative AI could substitute up to a quarter of current work.
These findings mirror the World Economic Forum’s Future of Jobs Report, which found that by 2025, humans and machines will spend an equal amount of time on work tasks.
This is not necessarily bad news for workers. A Gartner survey found that 70% of employees want AI to help them with specific tasks at work.
As the chart above shows, workers want AI to do some of the heavy lifting in data processing, digital tasks and information discovery. Automating problem solving and improving workplace safety are also on workers’ AI wishlist.
Goldman Sachs estimates that AI could ultimately boost global GDP by 7% if it reaches a level where it significantly assists people in their work.
3. News Brief: AI Stories Around the World
The UK government has released a white paper detailing a new regulatory approach to AI. The government says its AI regulation strategy is based on five principles, including safety, transparency and accountability. It has no plans to create a dedicated AI regulator. Instead, existing bodies such as the Health and Safety Executive and the Equality and Human Rights Commission will oversee AI development and integration. Critics of the proposal told the BBC that the government’s approach lacks legal authority and warned of “significant gaps” in the proposed regulatory framework.
China’s Baidu has unveiled its long-awaited artificial intelligence-powered chatbot, Ernie Bot. Reuters reported on the launch, which featured a short video of Ernie Bot performing mathematical calculations, speaking in a Chinese dialect, and generating videos and images from text prompts. Baidu is seen as a leader in the race among China’s tech giants and start-ups to develop ChatGPT rivals.
An AI program will reportedly assist medical staff at a UK hospital with the task of checking breast-screening scans for signs of cancer, according to The Times. The AI will work alongside human clinicians at Leeds Teaching Hospitals NHS Trust to check the mammograms of about 7,000 patients. In the trial, two doctors and one AI will each check the scan slides. If all three agree there are no signs of cancer, the patient is given the all-clear. If any of the three disagree, the scan may be reviewed again and the patient may be referred for further tests.
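The review rule described above – a unanimous all-clear, otherwise escalation – amounts to a simple consensus check. As an illustrative sketch only (the function name and verdict labels below are hypothetical, not the trust’s actual system), it can be expressed like this:

```python
def screening_outcome(readings):
    """Decide the outcome of a three-reader breast-screening review.

    `readings` holds one verdict per reader (two doctors plus the AI):
    True means "possible signs of cancer", False means "clear".
    """
    if not any(readings):
        # All three readers agree the scan is clear: patient gets the all-clear.
        return "all-clear"
    # Any single positive reading triggers a second review or referral.
    return "refer for further review"

# Both doctors read the scan as clear, but the AI flags a possible sign,
# so the case is escalated rather than cleared.
print(screening_outcome([False, False, True]))   # refer for further review
print(screening_outcome([False, False, False]))  # all-clear
```

The design choice here is deliberately conservative: the AI cannot clear a patient on its own, it can only add a dissenting vote that sends the scan back to humans.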
Using OpenAI’s GPT-4, Microsoft has launched a tool to help cybersecurity professionals identify breaches and threat signals, and analyze data more effectively. The tool, named Security Copilot, presents a simple prompt box that assists security analysts with tasks such as summarizing incidents, analyzing vulnerabilities, and sharing information with colleagues on a pinboard. The assistant also draws on Microsoft’s security-specific model, which the company describes as a “growing set of security-specific skills” fed with more than 65 trillion signals every day.
The World Economic Forum’s Platform for Shaping the Future of Artificial Intelligence and Machine Learning brings together stakeholders from around the world to accelerate the adoption of transparent and inclusive AI, so the technology can scale in a safe, ethical and responsible way.
Contact us for details on how to participate.
4. More on AI from Agenda
The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the internet and the mobile phone, says Bill Gates. The Microsoft founder and philanthropist says AI will change the way people work, learn, travel, get healthcare and communicate. Entire industries will reorient around it, and businesses will distinguish themselves by how well they use it. Read more from Bill Gates via the link above.
What do experts in AI research think our future will look like when we coexist and work with hyperintelligent technologies? Artificial intelligence that exceeds our own may sound like science fiction, but it could soon become part of our daily lives. The charts in this article show the views of 356 experts as machines grow smarter by the day.
Artificial intelligence has reached a tipping point of sorts, capturing the imagination of everyone from students to the leaders of the world’s biggest tech companies. Excitement is growing around the possibilities AI tools unlock, but exactly what these tools enable and how they work is still not widely understood. But given how sophisticated tools like ChatGPT have become, it seems right to let generative AI explain what it is.