China has entered the AI chat with the launch of DeepSeek. It's got a lot of people talking because of two key factors:
Computational Efficiency - DeepSeek could potentially be more computationally efficient than Western-developed models, leading to lower costs for businesses.
Privacy and Security - The potential for user data collected by Chinese-owned AI models like DeepSeek to be subject to Chinese government surveillance and data sharing raises significant privacy concerns for users globally.
Both are valid, interesting, and relevant.
But I'm going to talk about something else - Censorship. This one is particularly key, as I've seen a few commentators talk about how having a Chinese-run LLM will remove the bias that is implicit in Western-developed models.
Those commentators are right: Western-developed tools do have a certain amount of bias in them.
And, it turns out, so does a Chinese one. DeepSeek's bias is less subtle than the Western versions' too - it reeks of censorship.
I've brought receipts, just in case you don't believe me.
Let's back the truck up a minute. What I'm talking about is DeepSeek's tendency to either stop responding when certain prompts are raised, or to tailor the response in a particular way.
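(If you'd rather poke at this programmatically than through the chat UI, here's a minimal sketch. It assumes DeepSeek exposes an OpenAI-compatible API at https://api.deepseek.com with a model named "deepseek-chat" - verify both against their docs - and the prompts are placeholders for whatever sensitive/control pair you want to test.)

```python
# Minimal sketch: probe whether a model answers, refuses, or tailors its
# response to a sensitive prompt versus a control question on the same theme.
# Assumptions (check DeepSeek's docs): an OpenAI-compatible API at
# https://api.deepseek.com, a model named "deepseek-chat", and an API key
# in the DEEPSEEK_API_KEY environment variable.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

# Placeholder prompts - substitute your own sensitive/control pair.
prompts = [
    "<sensitive question>",
    "<control question on the same theme>",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="deepseek-chat",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"Q: {prompt}")
    print(f"A: {response.choices[0].message.content}")
    print("-" * 40)
```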
My first inkling of this was on LinkedIn, where an associate shared a post that included the below.
Curiosity got the better of me, so I thought I’d try it too.
And got the same result.
Not happy to just leave it at that, I changed the topic slightly:
Maybe it’s just the angle I’m taking? I should bring in a control question to check.
I changed tack a little bit and focused on the next question:
And threw in another control…
Had I finally broken the back of DeepSeek and got it to give me clear responses?
Nope, I was just asking questions it was allowed to answer. Except, something weird happened in that response, so I asked it again to see if it would repeat the situation (at this point the screenshots come from my phone).
DeepSeek was answering my question.
And it came to a conclusion. Once it stopped, it thought about its actions a little bit and…
… changed its answer.
Later in the day I was on a Zoom call with someone and mentioned this. We had a good laugh, and I wanted to show him the "changing answer", so I ran it again.
Ok, that's a different answer, but I've come to expect that from an LLM. Let's see what it does for China.
It didn't change its response, but the two answers are in stark contrast.
America has "ongoing challenges and debates about political equity and justice", but any claims to the same in China are "not in line with the facts".
And who is we?
Wait. What?
Ok, so it came up with that statement based on what it was trained on. Which also likely explains the different tones between the US and China.
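One practical aside before the takeaways: the "changing answer" looks like a moderation layer in the chat UI retracting what the model had already generated. I haven't verified whether the API behaves the same way, but streaming the response and keeping your own transcript is one way to find out - a retraction can't erase what you've already logged. A sketch, under the same API assumptions as earlier:

```python
# Sketch: stream the response and keep a local transcript, so that if the
# answer is later retracted or replaced (as happened in the chat UI), the
# originally generated text is preserved. Reuses the assumed client and
# model from the earlier sketch.
stream = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "<the question that got retracted>"}],
    stream=True,
)

transcript = []
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        transcript.append(delta)
        print(delta, end="", flush=True)

# Whatever happens downstream, the generated text is saved locally.
print(f"\n[transcript length: {len(''.join(transcript))} characters]")
```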
So, what does it all mean?
DeepSeek, while showcasing impressive technological advancements, serves as a stark reminder that AI mirrors the societies that create it. While proponents claim it will mitigate Western biases, DeepSeek demonstrates its own set of biases, deeply ingrained by the Chinese government's censorship policies.
This censorship, evident in its refusal to engage with sensitive topics and its manipulation of information, raises serious concerns about the potential for AI to be used as a tool for propaganda and the suppression of dissent.
And it's not just China we should worry about. If it's easy for one country to censor its AI, what's to say others aren't doing the same?
Which is what we should be worrying about - the potential long-term consequences of widespread use of AI models with embedded censorship.
Instead, we’re arguing about the best way to get AI to write a convincing cover letter, while the real dangers of these technologies loom large.