Attacks against personal data are up 300%, Apple warns

Q&A: Cisco CIO sees AI embedded in every product and process

Less than a year after OpenAI’s ChatGPT was released to the public, Cisco Systems is already well into the process of embedding generative artificial intelligence (genAI) into its entire product portfolio and internal backend systems.

The plan is to use it in virtually every corner of the business, from automating network functions and monitoring security to creating new software products.

But Cisco’s CIO, Fletcher Previn, is also dealing with a scarcity of IT talent to create and tweak large language model (LLM) platforms for domain-specific AI applications. As a result, IT workers are learning as they go, while discovering new places and ways the ever-evolving technology can create value.

Biden lays down the law on AI

In a sweeping executive order, US President Joseph R. Biden Jr. on Monday set up a comprehensive series of standards, safety and privacy protections, and oversight measures for the development and use of artificial intelligence (AI).

Among more than two dozen initiatives, Biden’s “Safe, Secure, and Trustworthy Artificial Intelligence” order was a long time coming, according to many observers who’ve been watching the AI space — especially with the rise of generative AI (genAI) in the past year.

Apple’s latest China App Store problem is a warning for us all

Homeland Security confirms your privacy is no longer safe

The big problem with privacy is that once you relinquish some of it, you never get it back. What makes it worse is when those who are supposed to protect your rights choose to undermine them. When they do so, they eat away at the thin protections we should all enjoy in the digital age.

US agencies’ illegal use of smartphone data

That is why it is so concerning to learn, from a newly released US Department of Homeland Security report, that multiple US government agencies illegally used smartphone location data, breaching privacy regulations in the process. The agencies purchased smartphone location data, including Advertising Identifiers (AdIDs), from data brokers that had harvested it from a wide range of apps.

Google to block Bard conversations from being indexed on Search

Alphabet-owned Google is working to block user conversations with its new Bard generative AI assistant from being indexed by Google Search or surfacing in search results.

“Bard allows people to share chats, if they choose. We also don’t intend for these shared chats to be indexed by Google Search. We’re working on blocking them from being indexed now,” Google’s Search Liaison account posted on Twitter, now X.

The internet search giant was responding to an SEO consultant who pointed out on Twitter that user conversations with Bard were being indexed on Google Search.
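Google has not said how it will implement the block, but the standard mechanisms site owners use are a robots.txt crawl rule and a noindex directive. The sketch below is purely illustrative; the /share/ path is a hypothetical placeholder, not Bard's actual URL structure:

```text
# Hypothetical robots.txt rule telling all crawlers not to fetch shared-chat pages.
User-agent: *
Disallow: /share/
```

A robots.txt rule only stops future crawling; pages already in the index are typically removed by serving a `noindex` directive, for example via an `X-Robots-Tag: noindex` HTTP response header, so the crawler drops them on its next visit.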

Q&A: How one CSO secured his environment from generative AI risks

In February, travel and expense management company Navan (formerly TripActions) chose to go all-in on generative AI technology for myriad business and customer-assistance uses.

The Palo Alto, Calif.-based company turned to ChatGPT from OpenAI and coding assistance tools from GitHub Copilot to write, test, and fix code; the decision has boosted Navan’s operational efficiency and reduced overhead costs.

GenAI tools have also been used to build a conversational experience for the company’s client virtual assistant, Ava. Ava, a travel and expense chatbot assistant, offers customers answers to questions and a conversational booking experience. It can also offer data to business travelers, such as company travel spend, volume, and granular carbon emissions details.

Why and how to create corporate genAI policies

As a large number of companies continue to test and deploy generative artificial intelligence (genAI) tools, many are at risk of AI errors, malicious attacks, and running afoul of regulators — not to mention the potential exposure of sensitive data.

For example, in April, after Samsung’s semiconductor division allowed engineers to use ChatGPT, workers using the platform leaked trade secrets on at least three occasions, according to published accounts. One employee pasted confidential source code into the chat to check for errors, while another worker shared code with ChatGPT and “requested code optimization.”

Zoom goes for a blatant genAI data grab; enterprises, beware (updated)

Credit to Author: eschuman@thecontentfirm.com | Date: Thu, 17 Aug 2023 07:06:00 -0700

When Zoom amended its terms of service earlier this month — a bid to make executives comfortable that it wouldn’t use Zoom data to train generative AI models — it quickly stirred up a hornet’s nest. So the company “revised” the terms of service, and left in place ways it can still get full access to user data.

Computerworld repeatedly reached out to Zoom without success to clarify what the changes really mean.

Editor’s note: Shortly after this column was published, Zoom again changed its terms and conditions. We’ve added an update to the end of the story covering the latest changes.

Before I delve into the legalese — and Zoom’s weasel words to falsely suggest it was not doing what it obviously was doing — let me raise a more critical question: Is there anyone in the video-call business not doing this? Microsoft? Google? Those are two firms that never met a dataset that they didn’t love.

As VR headset adoption grows, privacy issues could emerge

Head and hand motion data gathered from virtual reality (VR) headsets could be as effective at identifying individuals as fingerprints or face scans, research studies have shown, potentially compromising user privacy when interacting in immersive virtual environments.

Two recent studies by researchers at the University of California, Berkeley, showed how data gathered by VR headsets could be used to identify individuals with a high level of accuracy, and potentially reveal a host of personal attributes, including height, weight, age, and even marital status, according to a Bloomberg report Thursday.
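The Berkeley papers' actual pipelines are not reproduced here, but the underlying re-identification idea can be sketched as nearest-centroid matching on summary statistics of motion telemetry. Everything below, including the feature choices (mean head height, hand-speed variance), user names, and numbers, is invented for illustration:

```python
# Hypothetical sketch: re-identifying a VR user from motion telemetry by
# nearest-centroid matching on simple summary features. All data is simulated.
import random
import statistics

random.seed(0)

def summarize(samples):
    """Reduce a stream of (head_height, hand_speed) samples to a feature vector."""
    heights = [h for h, _ in samples]
    speeds = [s for _, s in samples]
    return (statistics.mean(heights), statistics.pvariance(speeds))

def simulate_session(head_height, hand_speed, n=200):
    """Generate noisy telemetry for a user with the given physical traits."""
    return [(random.gauss(head_height, 0.01), random.gauss(hand_speed, 0.05))
            for _ in range(n)]

# Enroll three users from one recorded session each.
users = {"alice": (1.60, 0.8), "bob": (1.82, 1.1), "carol": (1.70, 0.6)}
profiles = {name: summarize(simulate_session(*traits))
            for name, traits in users.items()}

def identify(samples):
    """Match an anonymous session to the closest enrolled profile."""
    feat = summarize(samples)
    return min(profiles,
               key=lambda name: sum((a - b) ** 2
                                    for a, b in zip(profiles[name], feat)))

# A later, supposedly anonymous session from Bob is re-identified.
print(identify(simulate_session(1.82, 1.1)))  # prints "bob"
```

The point of the sketch is that even coarse aggregates of motion data act like a fingerprint: physical traits such as height leak through headset telemetry, so stripping a username from a session does not make it anonymous.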
