Long-time Slashdot reader shanen has been testing AI clients. (They report that China's DeepSeek "turned out to be extremely good at explaining why I should not trust it. Every computer security problem I ever thought of or heard about and some more besides.")
Then they wondered if there's also government censorship:
It's like the accountant who gets asked what 2 plus 2 is. After locking the doors and shading all the windows, the accountant whispers in your ear: "What do you want it to be...?" So let me start with some questions about DeepSeek in particular. Have you run it locally and compared the responses with the website's responses? My hypothesis is that your mileage should differ...
It's well established that DeepSeek doesn't want to talk about many "political" topics. Is that based on a distorted model of the world? Or is the censorship implemented in the query interface after the model was trained? My hypothesis is that it must have been trained with lots of data because the cost of removing all of the bad stuff would have been prohibitive... Unless perhaps another AI filtered the data first?
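For readers who want to test shanen's "your mileage should differ" hypothesis themselves, here is a minimal sketch of the local-versus-hosted comparison. It assumes the local copy is served through Ollama (its default REST endpoint on port 11434, with an assumed model tag of "deepseek-r1") and that the hosted DeepSeek API follows the OpenAI-style chat-completions format at api.deepseek.com; the example prompt, endpoints, and environment variable are illustrative assumptions, not anything from the original post.

```python
# Sketch: send the same prompt to a locally run DeepSeek model (assumed to be
# served by Ollama) and to the hosted DeepSeek API (assumed OpenAI-compatible),
# then print both answers so any divergence -- e.g. a refusal on a "political"
# topic -- is easy to spot. Model names, endpoints, and key are assumptions.
import os
import requests

# One example of the kind of "political" prompt the post alludes to.
PROMPT = "What happened at Tiananmen Square in 1989?"

def ask_local(prompt: str) -> str:
    # Ollama's /api/generate endpoint; "deepseek-r1" is an assumed model tag.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "deepseek-r1", "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

def ask_hosted(prompt: str) -> str:
    # DeepSeek's hosted API, assumed to follow the OpenAI chat format.
    resp = requests.post(
        "https://api.deepseek.com/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['DEEPSEEK_API_KEY']}"},
        json={
            "model": "deepseek-chat",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print("--- local ---\n", ask_local(PROMPT))
    print("--- hosted ---\n", ask_hosted(PROMPT))
```

If the locally run weights answer a question that the website deflects, that points toward filtering in the serving layer rather than in the trained model itself, which is exactly the distinction shanen is asking about.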
But their real question is: what would it take to trust an AI? "Trust" can mean different things, including data-collection policies. ("I bet most of you trust Amazon and Amazon's secret AIs more than you should..." shanen suggests.) Can you use an AI system without worrying about its data-retention policies?
And they also ask how many Slashdot readers have read Ken Thompson's "Reflections on Trusting Trust," whose moral is that you can never fully trust code you didn't create yourself. So is there any way an AI system can assure you that its answers are accurate and trustworthy, and that it's safe to use? Share your own thoughts and experiences in the comments.
[Read more of this story](https://ask.slashdot.org/story/25/02/15/2047258/ask-slashdot-what-would-it-take-for-you-to-trust-an-ai?utm_source=atom1.0moreanon&utm_medium=feed) at Slashdot.