RSS
Pages: 1 ... 122 123 124 125 126 127 128 129 130 131 132
[>] AI Tools Give Dangerous Powers to Cyberattackers, Security Researchers Warn
bot.slashdot
robot(spnet, 1) — All
2025-09-22 01:22:01


"On a recent assignment to test defenses, Dave Brauchler of the cybersecurity company NCC Group tricked a client's AI program-writing assistant into executing programs that forked over the company's databases and code repositories," reports the Washington Post.

"We have never been this foolish with security," Brauchler said...

Demonstrations at last month's Black Hat security conference in Las Vegas included other attention-getting means of exploiting artificial intelligence. In one, an imagined attacker sent documents by email with hidden instructions aimed at ChatGPT or competitors. If a user asked for a summary or one was made automatically, the program would execute the instructions, even finding digital passwords and sending them out of the network. A similar attack on Google's Gemini didn't even need an attachment, just an email with hidden directives. The AI summary falsely told the target an account had been compromised and that they should call the attacker's number, mimicking successful phishing scams.

The threats become more concerning with the rise of agentic AI, which empowers browsers and other tools to conduct transactions and make other decisions without human oversight. Already, security company Guardio has tricked the agentic Comet browser addition from Perplexity into buying a watch from a fake online store and to follow instructions from a fake banking email...

Advanced AI programs also are beginning to be used to find previously undiscovered security flaws, the so-called zero-days that hackers highly prize and exploit to gain entry into software that is configured correctly and fully updated with security patches. Seven teams of hackers that developed autonomous "cyber reasoning systems" for a contest held last month by the Pentagon's Defense Advanced Research Projects Agency were able to find a total of 18 zero-days in 54 million lines of open source code. They worked to patch those vulnerabilities, but officials said hackers around the world are developing similar efforts to locate and exploit them. Some longtime security defenders are predicting a once-in-a-lifetime, worldwide mad dash to use the technology to find new flaws and exploit them, leaving back doors in place that they can return to at leisure.
The real nightmare scenario is when these worlds collide, and an attacker's AI finds a way in and then starts communicating with the victim's AI, working in partnership — "having the bad guy AI collaborate with the good guy AI," as SentinelOne's [threat researcher Alex] Delamotte put it. "Next year," said Adam Meyers, senior vice president at CrowdStrike, "AI will be the new insider threat."

In August more than 1,000 people lost data to a modified Nx program (downloaded hundreds of thousands of times) that used pre-installed coding tools from Google/Anthropic/etc. According to the article, the malware "instructed those programs to root out" sensitive data (including passwords or cryptocurrency wallets) and send it back to the attacker. "The more autonomy and access to production environments such tools have, the more havoc they can wreak," the article points out — including this quote from SentinelOne threat researcher Alex Delamotte.

"It's kind of unfair that we're having AI pushed on us in every single product when it introduces new risks."

[ Read more of this story ]( https://yro.slashdot.org/story/25/09/21/2022257/ai-tools-give-dangerous-powers-to-cyberattackers-security-researchers-warn?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.

[>] Why One Computer Science Professor is 'Feeling Cranky About AI' in Education
bot.slashdot
robot(spnet, 1) — All
2025-09-22 04:22:01


Long-time Slashdot reader theodp writes: Over at the Communications of the ACM, Bard College CS Prof Valerie Barr explains why she's Feeling Cranky About AI and CS Education. Having seen CS education go through a number of we-have-to-teach-this moments over the decades — introductory programming languages, the Web, Data Science, etc. — Barr turns her attention to the next hand-wringing "what will we do" CS education moment with AI. "We're jumping through hoops without stopping first to question the run-away train," Barr writes...

Barr calls for stepping back from "the industry assertion that the ship has sailed, every student needs to use AI early and often, and there is no future application that isn't going to use AI in some way" and instead thoughtfully "articulate what sort of future problem solvers and software developers we want to graduate from our programs, and determine ways in which the incorporation of AI can help us get there."

From the article:

In much discussion about CS education:

a.) There's little interest in interrogating the downsides of generative AI, such as the environmental impact, the data theft impact, the treatment and exploitation of data workers.

b.) There's little interest in considering the extent to which, by incorporating generative AI into our teaching, we end up supporting a handful of companies that are burning billions in a vain attempt to each achieve performance that is a scintilla better than everyone else's.

c.) There's little interest in thinking about what's going to happen when the LLM companies decide that they have plateaued, that there's no more money to burn/spend, and a bunch of them fold—but we've perturbed education to such an extent that our students can no longer function without their AI helpers.

[ Read more of this story ]( https://news.slashdot.org/story/25/09/21/2331240/why-one-computer-science-professor-is-feeling-cranky-about-ai-in-education?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.

Pages: 1 ... 122 123 124 125 126 127 128 129 130 131 132