Researchers, legal experts want AI firms to open up for safety checks

More than 150 leading artificial intelligence (AI) researchers, ethicists, and others have signed an open letter calling on generative AI (genAI) companies to submit their systems to independent evaluation, arguing that the current lack of outside scrutiny raises concerns about basic protections.

The letter, drafted by researchers from MIT, Princeton, and Stanford University, calls for legal and technical protections for good-faith research on genAI models; the absence of such protections, the authors said, is hampering safety work that could help protect the public.

The arrival of genAI could cover critical skills gaps, reshape IT job market

Generative artificial intelligence (genAI) is likely to play a critical role in addressing skills shortages in today’s marketplace, according to a new study by London-based Kaspersky Research. It showed that 40% of 2,000 C-level executives surveyed plan to use genAI tools such as ChatGPT to cover critical skills shortages through the automation of tasks.

The European-based study found genAI to be firmly on the business agenda, with 95% of respondents regularly discussing ways to maximize value from the technology at the most senior level, even as 91% admitted they don’t really know how it works.

GenAI is highly inaccurate for business use — and getting more opaque

Large language models (LLMs), the algorithmic platforms on which generative AI (genAI) tools like ChatGPT are built, are highly inaccurate when connected to corporate databases and becoming less transparent, according to two studies.

One study by Stanford University showed that as LLMs continue to ingest massive amounts of information and grow in size, the origins of the data they use are becoming harder to trace. That, in turn, makes it difficult for businesses to know whether they can safely build applications on commercial genAI foundation models, and for academics to rely on them for research.

White House to issue AI rules for federal employees

After earlier efforts to rein in generative artificial intelligence (genAI) were criticized as too vague and ineffective, the Biden Administration is now expected to announce new, more restrictive rules for use of the technology by federal employees.

The executive order, expected to be unveiled Monday, would also change immigration standards to allow a greater influx of technology workers to help accelerate US development efforts.

On Tuesday night, the White House sent invitations for a “Safe, Secure, and Trustworthy Artificial Intelligence” event Monday hosted by President Joseph R. Biden Jr., according to The Washington Post.

Q&A: How one CSO secured his environment from generative AI risks

In February, travel and expense management company Navan (formerly TripActions) chose to go all-in on generative AI technology for a myriad of business and customer assistance uses.

The Palo Alto, Calif.-based company turned to OpenAI's ChatGPT and GitHub's Copilot coding assistant to write, test, and fix code; the decision has boosted Navan's operational efficiency and reduced overhead costs.

GenAI tools have also been used to build a conversational experience for Ava, the company's client virtual assistant. A travel and expense chatbot, Ava offers customers answers to questions and a conversational booking experience. It can also provide business travelers with data such as company travel spend, volume, and granular carbon emissions details.

ServiceNow embeds AI-powered customer-assist features throughout products

Why and how to create corporate genAI policies

As a large number of companies continue to test and deploy generative artificial intelligence (genAI) tools, many are at risk of AI errors, malicious attacks, and running afoul of regulators — not to mention the potential exposure of sensitive data.

For example, in April, after Samsung’s semiconductor division allowed engineers to use ChatGPT, workers using the platform leaked trade secrets on at least three occasions, according to published accounts. One employee pasted confidential source code into the chat to check for errors, while another shared code with ChatGPT and “requested code optimization.”

Q&A: TIAA's CIO touts top AI projects, details worker skills needed now

Artificial intelligence (AI) is already having a significant effect on businesses and organizations across a variety of industries, even as many businesses are still just kicking the tires on the technology.

Those that have fully adopted AI claim a 35% increase in innovation and a 33% increase in sustainability over the past three years, according to research firm IDC. Companies also reported a 32% improvement in customer and employee retention after investing in AI.

Governments worldwide grapple with regulation to rein in AI dangers

Ever since generative AI exploded into public consciousness with the launch of ChatGPT at the end of last year, calls to regulate the technology to stop it from causing undue harm have risen to a fever pitch around the world. The stakes are high — just last week, technology leaders signed an open public letter saying that if government officials get it wrong, the consequence could be the extinction of the human race.

ChatGPT creators and others plead to reduce risk of global extinction from their tech

Hundreds of tech industry leaders, academics, and other public figures signed an open letter warning that artificial intelligence (AI) evolution could lead to an extinction event and saying that controlling the technology should be a top global priority.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read the statement published by San Francisco-based Center for AI Safety.

The brief statement reads almost like a mea culpa from the creators of the technology, who are now joining together to warn the world about it.
