G7 leaders warn of AI dangers, say the time to act is now

Leaders of the Group of Seven (G7) nations on Saturday called for the creation of technical standards to keep artificial intelligence (AI) in check, saying AI has outpaced oversight for safety and security.

Meeting in Hiroshima, Japan, the leaders said nations must come together on a common vision and goal of trustworthy AI, even if their approaches to achieving it vary. But any solution for digital technologies such as AI should be “in line with our shared democratic values,” they said in a statement.

Senate hearings see a clear and present danger from AI — and opportunities

There are vital national interests in advancing artificial intelligence (AI) to streamline public services and automate mundane tasks performed by government employees. But the government lacks both the IT talent and the systems to support those efforts.

“The federal government as a whole continues to face barriers in hiring, managing, and retaining staff with advanced technical skills — the very skills needed to design, develop, deploy, and monitor AI systems,” said Taka Ariga, chief data scientist at the US Government Accountability Office.

Daniel Ho, associate director of the Institute for Human-Centered Artificial Intelligence (HAI) at Stanford University, agreed, saying that by one estimate the federal government would need to hire about 40,000 IT workers to address the cybersecurity issues posed by AI.

Q&A: At MIT event, Tom Siebel sees ‘terrifying’ consequences from using AI

Speakers ranging from artificial intelligence (AI) developers to law firms grappled this week with questions about the efficacy and ethics of AI during MIT Technology Review’s EmTech Digital conference. Among those with a somewhat alarmist view of the technology (and of regulatory efforts to rein it in) was Tom Siebel, CEO of C3 AI and founder of CRM vendor Siebel Systems.

Siebel was on hand to talk about how businesses can prepare for an incoming wave of AI regulations, but in his comments Tuesday he touched on various facets of the debate over generative AI, including the ethics of using it, how it could evolve, and why it could be dangerous.

As Europeans strike first to rein in AI, the US follows

A proposed set of rules by the European Union would, among other things, require makers of generative AI tools such as ChatGPT to disclose any copyrighted material used by the technology platforms to create content of any kind.

A new draft of the European Parliament’s legislation, a copy of which was obtained by The Wall Street Journal, would allow the original creators of content used by generative AI applications to share in any profits that result.

Tech bigwigs: Hit the brakes on AI rollouts

More than 1,100 technology luminaries, leaders, and scientists have issued a warning against labs performing large-scale experiments with artificial intelligence (AI) more powerful than ChatGPT, saying the technology poses a grave threat to humanity.

In an open letter published by the Future of Life Institute, a nonprofit organization whose mission is to reduce global catastrophic and existential risks to humanity, Apple co-founder Steve Wozniak, SpaceX and Tesla CEO Elon Musk, and MIT professor and Future of Life Institute president Max Tegmark joined other signatories in agreeing that AI poses “profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.”

Samsung shows we need an Apple approach to generative AI

It feels as if practically everyone has been using OpenAI’s ChatGPT since the generative AI tool hit prime time. But many enterprise professionals may be embracing the technology without considering the risks of these large language models (LLMs).

That’s why we need an Apple approach to generative AI.

What happens at Samsung should stay at Samsung

ChatGPT seems to be a do-everything tool, capable of answering questions, finessing prose, generating suggestions, creating reports, and more. Developers have used the tool to help them write or improve their code, and some companies (such as Microsoft) are weaving this machine intelligence into existing products, web browsers, and applications.

Legislation to rein in AI’s use in hiring grows

Organizations are rapidly adopting artificial intelligence (AI) for discovering, screening, interviewing, and hiring candidates. It can reduce the time and work needed to find job candidates, and it can more accurately match applicant skills to a job opening.

But legislators and regulators are concerned that using AI-based tools to discover and vet talent could intrude on job seekers’ privacy and could perpetuate racial and gender biases already baked into the software.

“We have seen a substantial groundswell over the past two to three years with regard to legislation and regulatory rule-making as it relates to the use of AI in various facets of the workplace,” said Samantha Grant, a partner with the law firm of Reed Smith. 
