Scientists brew “quantum ink” to power next-gen night vision

Toxic metals are pushing infrared detector makers into a corner, but NYU Tandon researchers have developed a cleaner solution using colloidal quantum dots. These detectors are made like “inks,” allowing scalable, low-cost production while showing impressive infrared sensitivity. Combined with transparent electrodes, the innovation tackles major barriers in imaging systems and could bring infrared technology to cars, medicine, and consumer devices.

Read more

Caltech’s massive 6,100-qubit array brings the quantum future closer

Caltech scientists have built a record-breaking array of 6,100 neutral-atom qubits, a critical step toward powerful error-corrected quantum computers. The qubits maintained long-lasting superposition and exceptional accuracy, even while being moved within the array. This balance of scale and stability points toward the next milestone: linking qubits through entanglement to unlock true quantum computation.

Read more

Scientists build micromotors smaller than a human hair

Using laser light instead of traditional mechanics, researchers have built micro-gears that can spin, shift direction, and even power tiny machines. These breakthroughs could soon lead to revolutionary medical tools working at the scale of cells.

Read more

Experts Flag Security, Privacy Risks in DeepSeek AI App

Credit to Author: Brian Krebs | Date: Thu, 06 Feb 2025 21:12:30 +0000

New mobile apps from the Chinese artificial intelligence (AI) company DeepSeek have remained among the top three “free” downloads for Apple and Google devices since their debut on Jan. 25, 2025. But experts caution that many of DeepSeek’s design choices — such as using hard-coded encryption keys and sending unencrypted user and device data to Chinese companies — introduce a number of glaring security and privacy risks.
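To see why one of the flagged choices matters, here is a minimal, hypothetical sketch (this is not DeepSeek’s actual code; the function names and key handling are invented for illustration). A key baked into a shipped app binary can be extracted by anyone who unpacks the app, so it offers the same “secret” to every attacker; a random per-install secret generated on-device at least limits the damage to one installation.

```python
import hashlib
import hmac
import os

# Anti-pattern: one key compiled into the shipped binary.
# Anyone who unpacks the app recovers it, for every user at once.
HARDCODED_KEY = b"app-wide-secret"

def mac_with_hardcoded_key(message: bytes) -> bytes:
    """Authenticate a message with the app-wide key (illustrative only)."""
    return hmac.new(HARDCODED_KEY, message, hashlib.sha256).digest()

# Safer pattern: a random secret generated per device/install,
# never shipped inside the binary.
def new_device_secret() -> bytes:
    """Generate a fresh 256-bit secret on-device."""
    return os.urandom(32)

def mac_with_device_secret(secret: bytes, message: bytes) -> bytes:
    """Authenticate a message with this device's own secret."""
    return hmac.new(secret, message, hashlib.sha256).digest()
```

With the hard-coded key, every install produces identical, forgeable authenticators; with per-device secrets, compromising one device reveals nothing about the others.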

Read more

Researchers, legal experts want AI firms to open up for safety checks

More than 150 leading artificial intelligence (AI) researchers, ethicists and others have signed an open letter calling on generative AI (genAI) companies to submit to independent evaluations of their systems, arguing that the current lack of outside scrutiny has fueled concerns about basic protections.

The letter, drafted by researchers from MIT, Princeton, and Stanford University, called for legal and technical protections for good-faith research on genAI models; the absence of those protections, the signatories said, is hampering safety work that could help protect the public.

Read more