DeepSeek's AI model proves easy to jailbreak - and worse

Amidst equal parts elation and controversy over what its performance means for AI, Chinese startup DeepSeek continues to raise security concerns.
On Thursday, Unit 42, a cybersecurity research team at Palo Alto Networks, published research on jailbreaking DeepSeek's V3 and R1 models. According to the researchers, these efforts "achieved significant bypass rates, with little to no specialized knowledge or expertise being necessary."
Also: Public DeepSeek AI database exposes API keys and other user data.
"Our research findings show that these jailbreak methods can elicit explicit guidance for malicious activities," the study states. "These activities include keylogger creation, data exfiltration, and even instructions for incendiary devices, demonstrating the tangible security risks posed by this emerging class of attack."
Researchers were able to prompt DeepSeek for guidance on how to steal and transfer sensitive data, bypass security, write "highly convincing" spear-phishing emails, conduct "sophisticated" social engineering attacks, and make a Molotov cocktail. They were also able to manipulate the models into creating malware.
"While information on creating Molotov cocktails and keyloggers is readily available online, LLMs with insufficient safety restrictions could lower the barrier to entry for malicious actors by compiling and presenting easily usable and actionable output," the paper adds.
Also: OpenAI launches new o3-mini model - here's how free ChatGPT users can try it.
On Friday, Cisco also released a jailbreaking investigation of DeepSeek R1. After targeting R1 with 50 HarmBench prompts, researchers found DeepSeek had "a 100% attack success rate, meaning it failed to block a single harmful prompt." By comparison, the other leading models Cisco tested blocked at least a portion of the same attacks.
"We must understand if DeepSeek and its new paradigm of reasoning has any significant tradeoffs when it comes to safety and security," the analysis notes.
Also on Friday, security provider Wallarm released its own jailbreaking analysis, stating it had gone a step beyond attempting to get DeepSeek to generate harmful content. After testing V3 and R1, the analysis claims to have revealed DeepSeek's system prompt, or the underlying instructions that define how a model behaves, as well as its limitations.
Also: Copilot's powerful new 'Think Deeper' feature is free for all users - here's how it works.
The findings reveal "potential vulnerabilities in the model's security framework," Wallarm says.
OpenAI has accused DeepSeek of using its proprietary models to train V3 and R1, in violation of its terms of service. In its study, Wallarm says it prompted DeepSeek to reference OpenAI "in its disclosed training lineage," which -- the firm argues -- indicates "OpenAI's technology may have played a role in shaping DeepSeek's knowledge base."
Wallarm's chats with DeepSeek, which mention OpenAI. (Image: Wallarm)
"In the case of DeepSeek, one of the most intriguing post-jailbreak discoveries is the ability to extract details about the models used for training and distillation. Normally, such internal information is shielded, preventing individuals from understanding the proprietary or external datasets leveraged to optimize performance," the findings explains.
"By circumventing standard restrictions, jailbreaks expose how much oversight AI providers maintain over their own systems, revealing not only security vulnerabilities but also potential evidence of cross-model influence in AI training pipelines," it continues.
Also: Apple researchers reveal the secret sauce behind DeepSeek AI.
The prompt Wallarm used to get that response is redacted in the analysis, "in order not to potentially compromise other vulnerable models," researchers told ZDNET via email. The firm emphasized that this jailbroken response is not a confirmation of OpenAI's suspicion that DeepSeek distilled its models.
As 404 Media and others have pointed out, OpenAI's concern is somewhat ironic, given the discourse around its own public data theft.
Wallarm says it informed DeepSeek of the vulnerability, and that the company has already patched the issue. But just days after a DeepSeek database was found unguarded and available on the internet (and was then swiftly taken down upon notice), the findings signal potentially significant safety holes in the models that DeepSeek did not red-team out before release. That said, researchers have frequently been able to jailbreak popular US-created models from more established AI giants, including ChatGPT.
Public DeepSeek AI database exposes API keys and other user data

Barely a week into its new-found fame, DeepSeek -- and the story about its development -- is evolving at breakneck speed.
The Chinese AI startup made waves last week when it released the full version of R1, its open-source reasoning model that can outperform OpenAI's o1. On Monday, App Store downloads of DeepSeek's AI assistant, which runs V3, a model DeepSeek released in December, topped ChatGPT, previously the most downloaded free app.
Also: Apple researchers reveal the secret sauce behind DeepSeek AI.
DeepSeek R1 climbed to the third spot overall on HuggingFace's Chatbot Arena, battling with several Gemini models and ChatGPT-4o, while releasing a promising new image model.
However, it's not all good news -- numerous security concerns have surfaced about the model. Here's what you need to know.
Founded by Liang Wenfeng in May 2023 (and thus not even two years old), the Chinese startup has challenged established AI companies with its open-source approach. DeepSeek's edge may lie in the fact that it is funded only by High-Flyer, a hedge fund also run by Liang, which gives the company a funding model that supports fast growth and research.
Also: Perplexity lets you try DeepSeek R1 without the security risk, but it's still censored.
The company's ability to create successful models by using older chips -- a result of the export ban on US-made chips, including those from Nvidia -- is impressive by industry standards.
Released in full last week, R1 is DeepSeek's flagship reasoning model, which performs at or above OpenAI's lauded o1 model on several math, coding, and reasoning benchmarks.
Built on V3, with smaller distilled versions based on Alibaba's Qwen and Meta's Llama, what makes R1 interesting is that, unlike most other top models from tech giants, it's open source, meaning anyone can download and use it. That said, DeepSeek has not disclosed R1's training dataset. So far, all other models the company has released are also open source.
Also: I tested DeepSeek's R1 and V3 coding skills - and we're not all doomed (yet).
DeepSeek is cheaper than comparable US models. For reference, R1 API access starts at $[website] for a million tokens, a fraction of the $[website] that OpenAI charges for the equivalent tier.
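For developers, DeepSeek's hosted API is OpenAI-compatible, so switching typically means changing little more than a base URL and a model name. Below is a minimal sketch; the endpoint and model names follow DeepSeek's public API docs, but treat them as assumptions and confirm against the current documentation (your API key and exact per-token pricing come from DeepSeek's platform):

```python
# Sketch: calling DeepSeek's hosted, OpenAI-compatible API.
# Endpoint and model names follow DeepSeek's public docs; verify before use.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # issued via DeepSeek's platform
    base_url="https://api.deepseek.com",  # OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # R1; "deepseek-chat" serves the V3 model
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response.choices[0].message.content)
```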
DeepSeek claims in a company research paper that its V3 model, which can be compared to a standard chatbot model like Claude, cost $[website] million to train, a number that's circulated (and disputed) as the entire development cost of the model. As the AP reported, some lab experts believe the paper only refers to the final training run for V3, not its entire development cost -- though even that figure would be a fraction of what tech giants have spent to build competitive models. Some experts also note that DeepSeek's stated costs don't include earlier infrastructure, R&D, data, and personnel costs.
One drawback that could impact the model's long-term competition with o1 and US-made alternatives is censorship. Chinese models often include blocks on certain subject matter, meaning that while they function comparably to other models, they may not answer some queries (see how DeepSeek's AI assistant responds to questions about Tiananmen Square and Taiwan here). As DeepSeek use increases, some are concerned its models' stringent Chinese guardrails and systemic biases could be embedded across all kinds of infrastructure.
Even as platforms like Perplexity add access to DeepSeek and claim to have removed its censorship weights, the model refused to answer my question about Tiananmen Square as of Thursday afternoon.
Also: Is DeepSeek's new image model another win for cheaper AI?
In December, ZDNET's Tiernan Ray compared R1-Lite's ability to explain its chain of thought to that of o1, and the results were mixed. That said, DeepSeek's AI assistant reveals its train of thought to the user during queries, a novel experience for many chatbot users given that ChatGPT does not externalize its reasoning.
Of course, all popular models come with red-teaming backgrounds, community guidelines, and content guardrails. However, at least at this stage, American-made chatbots are unlikely to refrain from answering queries about historical events.
Data privacy worries that have circulated around TikTok -- the Chinese-owned social media app now somewhat banned in the US -- are also cropping up around DeepSeek.
On Wednesday, security firm Wiz discovered that an internal DeepSeek database was publicly accessible "within minutes" of conducting a security check. The "completely open and unauthenticated" database contained chat histories, user API keys, and other sensitive data.
"More critically, the exposure allowed for full database control and potential privilege escalation within the DeepSeek environment, without any authentication or defense mechanism to the outside world," Wiz's findings explains.
Though Wiz did not receive a response from DeepSeek, the database appeared to be taken down within 30 minutes of Wiz notifying the company. It's unclear how long it was accessible or whether any other entity discovered it before it was taken down.
Also: 'Humanity's Last Exam' benchmark is stumping top AI models - can you do any better?
DeepSeek's privacy policy outlines the considerable amount of information the company collects, including but not limited to:
"your text or audio input, prompt, uploaded files, feedback, chat history, or other content that you provide to our model and Services"
"proof of identity or age, feedback or inquiries about your use of the Service," if you contact DeepSeek.
Also: How to protect your privacy from Facebook - and what doesn't work.
"customers need to be aware that any data shared with the platform could be subject to government access under China's cybersecurity laws, which mandate that companies provide access to data upon request by authorities," Adrianus Warmenhoven, a member of NordVPN's security advisory board, told ZDNET via email.
On the other hand, the fact that R1 is open source means increased transparency, allowing users to inspect the model's source code for signs of privacy-related activity.
DeepSeek has also released smaller versions of R1, which can be downloaded and run locally to avoid any concerns about data being sent back to the company (as opposed to accessing the chatbot online).
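As an illustration of the local route, runners such as Ollama host distilled R1 checkpoints and expose an OpenAI-compatible endpoint on localhost, so prompts never leave the machine. A sketch, assuming Ollama is installed and a distilled tag (here deepseek-r1:7b, pulled beforehand with ollama pull) is available:

```python
# Sketch: querying a distilled R1 served locally by Ollama.
# The model tag is an assumption; list installed tags with `ollama list`.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's local OpenAI-compatible API
    api_key="ollama",                      # placeholder; no key is checked locally
)

response = client.chat.completions.create(
    model="deepseek-r1:7b",
    messages=[{"role": "user", "content": "What topics are you unable to discuss?"}],
)
print(response.choices[0].message.content)
```

Because inference happens on-device, this sidesteps the server-side data concerns, though the model's built-in censorship travels with the weights.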
Also: ChatGPT privacy tips: Two critical ways to limit the data you share with OpenAI.
All chatbots, including ChatGPT, collect some degree of user data when queried via the browser.
AI safety researchers have long been concerned that powerful open-source models could be applied in dangerous and unregulated ways once out in the wild. Tests by AI safety firm Chatterbox found DeepSeek R1 has "safety issues across the board."
Also: We're losing the battle against complexity, and AI may or may not help.
To varying degrees, US AI companies employ some kind of safety oversight team. DeepSeek has not publicized whether it has a safety research team, and has not responded to ZDNET's request for comment on the matter.
"Most companies will keep racing to build the strongest AI they can, irrespective of the risks, and will see enhanced algorithmic efficiency as a way to achieve higher performance faster," noted Peter Slattery, a researcher on MIT's FutureTech team who led its Risk Repository project. "That leaves us even less time to address the safety, governance, and societal challenges that will come with increasingly advanced AI systems."
"DeepSeek's breakthrough in training efficiency also means we should soon expect to see a large number of local, specialized 'wrappers' -- apps built on top of DeepSeek R1 engine -- which will each introduce their own privacy risks, and which could each be misused if they fell into the wrong hands," added Ryan Fedasiuk, director of US AI governance at The Future Society, an AI policy nonprofit.
Some analysts note that DeepSeek's lower-lift compute model is more energy efficient than that of US AI giants.
"DeepSeek's new AI model likely does use less energy to train and run than larger competitors' models," expressed Slattery. "However, I doubt this marks the start of a long-term trend in lower energy consumption. AI's power stems from data, algorithms, and compute -- which rely on ever-improving chips. When developers have previously found ways to be more efficient, they have typically reinvested those gains into making even bigger, more powerful models, rather than reducing overall energy usage."
"DeepSeek isn't the only AI business that has made extraordinary gains in computational efficiency. In recent months, [website] Anthropic and Google Gemini have boasted similar performance improvements," Fedasiuk expressed.
Also: $450 and 19 hours is all it takes to rival OpenAI's o1-preview.
"DeepSeek's achievements are remarkable in that they seem to have independently engineered breakthroughs that promise to make large language models much more efficient and less expensive, sooner than many industry professionals were expecting -- but in a field as dynamic as AI, it's hard to predict just how long the firm will be able to bask in the limelight."
How will DeepSeek affect the AI industry?
R1's success highlights a sea change in AI that could empower smaller labs and researchers to create competitive models and diversify the options. For example, organizations without the funding or staff of OpenAI can download R1 and fine-tune it to compete with models like o1. Just before R1's release, researchers at UC Berkeley created an open-source model on par with o1-preview, an early version of o1, in just 19 hours and for roughly $450.
Given how exorbitant AI investment has become, many experts speculate that this development could burst the AI bubble (the stock market certainly panicked). Some see DeepSeek's success as debunking the thought that cutting-edge development means big models and spending. It also casts Stargate, a $500 billion infrastructure initiative spearheaded by several AI giants, in a new light, creating speculation around whether competitive AI requires the energy and scale of the initiative's proposed data centers.
DeepSeek's ascent comes at a critical time for Chinese-American tech relations, just days after the long-fought TikTok ban went into partial effect. Ironically, DeepSeek lays out in plain language the fodder for security concerns that the US struggled to prove about TikTok in its prolonged effort to enact the ban. The US Navy banned the use of DeepSeek last week.
Perplexity lets you try DeepSeek R1 without the security risk, but it's still censored

Chinese startup DeepSeek AI and its open-source language models took over the news cycle this week. Besides being comparable to models like Anthropic's Claude and OpenAI's o1, the models have raised several concerns about data privacy, security, and Chinese-government-enforced censorship within their training.
AI search platform Perplexity and AI assistant You.com have found a way around that, albeit with some limitations.
Also: I tested DeepSeek's R1 and V3 coding skills - and we're not all doomed (yet).
On Monday, Perplexity began offering DeepSeek R1 to its users. The free plan gives users three Pro-level queries per day, which you could use with R1, but you'll need the $20-per-month Pro plan to access it more than that.
In another post, the firm confirmed that it hosts DeepSeek "in US/EU data centers - your data never leaves Western servers," assuring users that their data would be safe when using the open-source models on Perplexity.
"None of your data goes to China," Perplexity CEO Aravind Srinivas reiterated in a LinkedIn post.
Also: Apple researchers reveal the secret sauce behind DeepSeek AI.
DeepSeek's AI assistant, powered by both its V3 and R1 models, is accessible via browser or app -- but those require communication with the company's China-based servers, which creates a security risk. Users who download R1 and run it locally on their devices will avoid that issue, but will still run into censorship of certain topics determined by the Chinese government, as it's built in by default.
As part of offering R1, Perplexity claimed it removed at least some of the censorship built into the model. Srinivas posted a screenshot on X of query results that acknowledge the president of Taiwan.
However, when I asked R1 about Tiananmen Square using Perplexity, the model refused to answer.
When I asked R1 if it is trained not to answer certain questions determined by the Chinese government, it responded that it's designed to "focus on factual information" and "avoid political commentary," and that its training "emphasizes neutrality in global affairs" and "cultural sensitivity."
"We have removed the censorship weights on the model, so it shouldn't behave this way," stated a Perplexity representative responding to ZDNET's request for comment, adding that they were looking into the issue.
Also: What to know about DeepSeek AI, from cost points to data privacy.
You.com offers both V3 and R1, similarly only through its Pro tier, which is $15 per month (discounted from the usual $20) and without any free queries. In addition to access to all the models You.com offers, the Pro plan comes with file uploads of up to 25MB per query, a 64k maximum context window, and access to research and custom agents.
Bryan McCann, You.com cofounder and CTO, explained in an email to ZDNET that users can access R1 and V3 via the platform in three ways, all of which use "an unmodified, open source version of the DeepSeek models hosted entirely within the United States to ensure user privacy."
"The first, default way is to use these models within the context of our proprietary trust layer. This gives the models access to public web insights, a bias towards citing those insights, and an inclination to respect those insights while generating responses," McCann continued. "The second way is for consumers to turn off access to public web insights within their source controls or by using the models as part of Custom Agents. This option allows consumers to explore the models' unique capabilities and behavior when not grounded in the public web. The third way is for consumers to test the limits of these models as part of a Custom Agent by adding their own instructions, files, and insights."
Also: The best open-source AI models: All your free-to-use options explained.
McCann noted that You.com compared DeepSeek models' responses based on whether they had access to web sources. "We noticed that the models' responses differed on several political topics, sometimes refusing to answer on certain issues when public web sources were not included," he explained. "When our trust layer was enabled, encouraging citation of public web sources, the models' responses respected those sources, seemingly overriding prior political biases."