Biden lays down the law on AI

In a sweeping executive order, US President Joseph R. Biden Jr. on Monday established a comprehensive set of standards, safety and privacy protections, and oversight measures for the development and use of artificial intelligence (AI).

Encompassing more than two dozen initiatives, Biden’s “Safe, Secure, and Trustworthy Artificial Intelligence” order was a long time coming, according to many observers of the AI space, especially given the rise of generative AI (genAI) in the past year.

Q&A: How one CSO secured his environment from generative AI risks

In February, travel and expense management company Navan (formerly TripActions) chose to go all-in on generative AI technology for a variety of business and customer-assistance uses.

The Palo Alto, CA-based company turned to OpenAI’s ChatGPT and GitHub’s Copilot coding assistant to write, test, and fix code; the decision has boosted Navan’s operational efficiency and reduced overhead costs.

GenAI tools have also been used to build a conversational experience for Ava, the company’s virtual travel and expense chatbot assistant. Ava answers customer questions, provides a conversational booking experience, and can surface data for business travelers, such as company travel spend, volume, and granular carbon emissions details.

Apple toughens up app security with API control

Apple is at war with device fingerprinting — the use of fragments of unique device-specific information to track users online. This fall, it will put in place yet another important limitation to prevent unauthorized use of this kind of tech.

At WWDC 2023, Apple announced a new initiative designed to make apps that track users easier to spot, while giving users additional transparency into such tracking. Now it has told developers a little more about how this will work in practice.

Apple beefs up enterprise identity, device management

Last week at WWDC, Apple introduced new capabilities related to Managed Apple IDs and to user identity overall.

Managed Apple IDs have been around for some time. They handle many of the same tasks as personal Apple IDs, but are owned by an organization rather than the end user and are typically created alongside a user’s enterprise identity through federated authentication with a company’s identity provider. 

Managed IDs allow a user to activate and use an Apple device, whether company-owned or personal (BYOD), and to create a business profile on employee devices. They also provide access to Apple services, including core iCloud functionality such as backing up work-related content on the device and syncing app data from Mail, Calendar, Contacts, and Notes. In addition, they let IT manage which resources and devices a user can access, reset passwords, and assist with Apple device management.

Why Apple's iOS 16.6 upgrade will be the talk of the town

Apple’s big developer event is approaching, and it looks as if the company will press home its message on privacy as it begins to seed support for the AR operating systems it’s now expected to announce there.

Apple wants to get you updating

As of now, Apple seems set to use the Worldwide Developers Conference (WWDC), starting June 5, to introduce its first mixed-reality glasses, likely called RealityPro. These will be accompanied by an operating system that recent patent filings suggest will be called xrOS or xrProOS. The event will also see Apple introduce new iterations of its other operating systems, which developers will be able to work with soon after the show.

Q&A: At MIT event, Tom Siebel sees ‘terrifying’ consequences from using AI

Speakers ranging from artificial intelligence (AI) developers to law firms grappled this week with questions about the efficacy and ethics of AI during MIT Technology Review’s EmTech Digital conference. Among those with a somewhat alarmist view of the technology (and regulatory efforts to rein it in) was Tom Siebel, CEO of C3 AI and founder of CRM vendor Siebel Systems.

Siebel was on hand to talk about how businesses can prepare for an incoming wave of AI regulations, but in his comments Tuesday he touched on various facets of the debate over generative AI, including the ethics of using it, how it could evolve, and why it could be dangerous.

Generative AI is about to destroy your company. Will you stop it?

Credit to Author: eschuman@thecontentfirm.com | Date: Mon, 01 May 2023 10:21:00 -0700

As the debate rages about how much IT admins and CISOs should use generative AI — especially for coding — SailPoint CISO Rex Booth sees far more danger than benefit, especially given the industry’s less-than-stellar history of making the right security decisions.

Google has already decided to publicly leverage generative AI in its searches, a move that is freaking out a wide range of AI specialists, including a senior manager of AI at Google itself.

Do the productivity gains from generative AI outweigh the security risks?

Credit to Author: eschuman@thecontentfirm.com | Date: Fri, 21 Apr 2023 08:08:00 -0700

There’s no doubt generative AI models such as ChatGPT, Bing Chat, or Google Bard can deliver massive efficiency benefits, but they bring with them major cybersecurity and privacy concerns, along with accuracy worries.

It’s already known that these programs — especially ChatGPT itself — make up facts and repeatedly lie. Far more troubling, no one seems to understand why and how these lies, coyly dubbed “hallucinations,” are happening. 

In a recent 60 Minutes interview, Google CEO Sundar Pichai explained: “There is an aspect of this which we call — all of us in the field — call it as a ‘black box.’ You don’t fully understand. And you can’t quite tell why it said this.”

Do you really know what’s inside your iOS and Android apps?

It’s time to audit your code: it appears that some no-code/low-code features used in iOS and Android apps may not be as secure as you thought. That’s the big takeaway from a report explaining that disguised Russian software is being used in apps from the US Army, the CDC, the UK Labour Party, and other entities.

When Washington becomes Siberia

What’s at issue is that code developed by a company called Pushwoosh has been deployed within thousands of apps from thousands of entities. These include the Centers for Disease Control and Prevention (CDC), which says it was led to believe Pushwoosh was based in Washington, when the developer is, in fact, based in Siberia, Reuters reports. A visit to Pushwoosh’s Twitter feed shows the company claiming to be based in Washington, DC.
