AI decision-making: Where do businesses draw the line?

“A computer can never be held accountable, therefore a computer must never make a management decision.”
Artificial intelligence (AI) adoption is on the rise: 42% of enterprises have actively deployed AI, and 40% are experimenting with the technology. Of those using or exploring AI, 59% have accelerated their investments and rollouts over the past two years. The result is an uptick in AI decision-making that leverages intelligent tools to arrive at (supposedly) accurate answers.
Rapid adoption, however, raises a question: Who’s responsible if AI makes a poor choice? Does the fault lie with IT teams? Executives? AI model builders? Device manufacturers?
In this piece, we’ll explore the evolving world of AI and reexamine the quote above in the context of current use cases: Do companies still need a human in the loop, or can AI make the call?
Getting it right: Where AI is improving business outcomes.
Guy Pearce, principal consultant at DEGI and member of the ISACA working trends group, has been involved with AI for more than three decades. “First, it was symbolic,” he says, “and now it’s statistical. It’s algorithms and models that allow data processing and improve business performance over time.”
Data from IBM’s recent AI in Action report shows the impact of this shift. Two-thirds of leaders say that AI has driven more than a 25% improvement in revenue growth rates, and 72% say that the C-suite is fully aligned with IT leadership about what comes next on the path to AI maturity.
With confidence in AI growing, enterprises are implementing intelligent tools to improve business outcomes. For example, wealth management firm Consult Venture Partners deployed AIda AI, a conversational digital AI concierge that uses IBM watsonx Assistant technology to answer potential clients’ questions without the need for human agents.
The results speak for themselves: AIda AI answered 92% of queries correctly, 47% of queries led to webinar registrations and 39% of inquiries turned into leads.
Missing the mark: What happens if AI makes mistakes?
92% is an impressive achievement for AIda AI. The caveat? It was still wrong 8% of the time. So, what happens when AI makes mistakes?
Pearce uses the example of a financial firm leveraging AI to evaluate credit scores and issue loans. The outcomes of these decisions are relatively low stakes. In the best-case scenario, AI approves loans that are paid back on time and in full. In the worst case, borrowers default, and companies need to pursue legal action. While inconvenient, the negative outcomes are far outweighed by the potential positives.
“When it comes to high stakes,” says Pearce, “look at the medical industry. Let’s say we use AI to address the problem of wait times. Do we have sufficient data to ensure patients are seen in the right order? What if we get it wrong? The outcome could be death.”
As a result, how AI is used in decision-making depends largely on what it’s making decisions about and how these decisions impact both the enterprise making the decisions and those the decisions affect.
In some cases, even the worst-case scenario is a minor inconvenience. In others, the results could cause significant harm.
Taking the blame: Who’s accountable if AI gets it wrong?
In April 2024, a Tesla operating in “full self-driving” mode struck and killed a motorcyclist. The driver of the vehicle admitted to looking at their phone prior to the crash, despite active driver supervision being required.
So who takes the blame? The driver is the obvious choice and was arrested on charges of vehicular homicide.
But this isn’t the only path to accountability. There’s also a case to be made that Tesla bears some responsibility, since the company’s AI algorithm failed to spot the victim. Blame could also be placed on governing bodies such as the National Highway Traffic Safety Administration (NHTSA). Perhaps their testing wasn’t rigorous or complete enough.
One could even argue that the creator(s) of Tesla’s AI could be held liable for letting code that could kill someone go live.
This is the paradox of AI decision-making: Is someone at fault, or is everyone at fault? “If you bring all the stakeholders together who should be accountable, where does that accountability lie?” asks Pearce. “With the C-suite? With the whole team? If you have accountability that’s spread over the entire organization, everyone can’t end up in jail. Ultimately, shared accountability often leads to no accountability.”
So, where do organizations draw the line? Where does AI insight give way to human decision-making?
Three considerations are key: ethics, risk and trust.
“When it comes to ethical dilemmas,” says Pearce, “AI can’t do it.” This is because intelligent tools naturally seek the most efficient path, not the most ethical. As a result, any decision involving ethical questions or concerns should include human oversight.
Risk, meanwhile, is an AI specialty. “AI is good in risk,” Pearce says. “What statistical models do is give you something called a standard error, which lets you know if what AI is recommending has a high or low potential variability.” This makes AI great for risk-based decisions like those in finance or insurance.
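Pearce’s point about standard error can be made concrete. The sketch below is illustrative only (the numbers and the threshold are assumptions, not from any real model): it treats the spread of repeated model estimates as a signal for when a risk-based decision is safe to automate.

```python
import statistics

# Illustrative numbers only: repeated default-probability estimates
# from hypothetical runs of a credit-risk model.
estimates = [0.042, 0.047, 0.044, 0.051, 0.045, 0.043, 0.048, 0.046]

mean = statistics.mean(estimates)
# Standard error of the mean: sample standard deviation / sqrt(n).
se = statistics.stdev(estimates) / len(estimates) ** 0.5
print(f"estimate: {mean:.4f} +/- {se:.4f}")

# A simple policy gate (the threshold is an assumption): automate the
# decision only when the model's variability is low; otherwise route
# the case to a human reviewer.
AUTO_DECIDE_MAX_SE = 0.01
needs_human_review = se > AUTO_DECIDE_MAX_SE
```

A low standard error means the model’s recommendation is stable across runs, which is exactly the property Pearce says makes AI well suited to risk-based decisions.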
Finally, enterprises need to prioritize trust. “There are declining levels of trust in institutions,” says Pearce. “Many citizens don’t feel confident that the data they share is being used in a trustworthy manner.”
Should AI be used for management decisions? Maybe. Will it be used to make some of these decisions? Almost certainly. The draw of AI — its ability to capture, correlate and analyze multiple data sets and deliver new insights — makes it a powerful tool for enterprises to streamline operations and reduce costs.
What’s less clear is how the shift to management-level decision-making will impact accountability. Current conditions create “blurry lines” in this area; legislation hasn’t kept pace with increasing AI usage.
To ensure alignment with ethical principles, reduce the risk of wrong choices and engender stakeholder and customer trust, businesses are best served by keeping humans in the loop. Maybe this means direct approval from staff is required before AI can act. Maybe it means the occasional review and evaluation of AI decision-making outcomes.
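The first of those patterns, requiring sign-off before AI can act, can be sketched in a few lines. All names here are hypothetical, and the approval callback is a stand-in for a real human review step (in practice it might open a ticket or prompt a reviewer).

```python
# Hypothetical human-in-the-loop wrapper; names are illustrative.
def require_approval(decide, approve):
    """Wrap an AI decision function so a reviewer approves before it takes effect."""
    def wrapped(case):
        proposal = decide(case)
        if approve(case, proposal):
            return proposal
        return "escalated-to-human"  # reviewer withheld approval
    return wrapped

# Toy AI decision, and an approval callback standing in for a human
# reviewer who only auto-approves low-value cases.
ai_decide = lambda case: "approve-loan" if case["score"] > 700 else "deny-loan"
human_ok = lambda case, proposal: case["amount"] <= 50_000

gated = require_approval(ai_decide, human_ok)
print(gated({"score": 720, "amount": 10_000}))   # small loan: AI decision stands
print(gated({"score": 720, "amount": 250_000}))  # large loan: routed to a human
```

The design choice worth noting is that the gate wraps the model rather than living inside it, so the same policy can be applied to any decision function.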
Whatever approach enterprises choose, however, the core message remains the same: When it comes to AI-driven decisions, there’s no hard-and-fast line. It’s a moving target, one defined by possible risk, potential reward and probable outcomes.
Cybersecurity awareness: Apple’s cloud-based AI security system

The rising influence of artificial intelligence (AI) has many organizations scrambling to address the new cybersecurity and data privacy concerns created by the technology, especially as AI is used in cloud systems. Apple addresses AI’s security and privacy issues head-on with its Private Cloud Compute (PCC) system.
Apple seems to have solved the problem of offering cloud services without undermining user privacy or adding additional layers of insecurity. It had to: as a ComputerWorld article noted, Apple needed to create a cloud infrastructure on which to run generative AI (genAI) models that need more processing power than its devices can supply, while also protecting user privacy.
Apple is opening the PCC system to security researchers to “learn more about PCC and perform their own independent verification of our claims,” the company announced. In addition, Apple is expanding its Apple Security Bounty.
What does this mean for AI security going forward? Security Intelligence spoke with Ruben Boonen, CNE Capability Development Lead at IBM, to learn what researchers think about PCC and Apple’s approach.
SI: ComputerWorld reported this story, saying that Apple hopes that “the energy of the entire infosec community will combine to help build a moat to protect the future of AI.” What do you think of this move?
Boonen: I read the ComputerWorld article and reviewed Apple’s own statements about their private cloud. I think what Apple has done here is good. I think it goes beyond what other cloud providers do, because Apple is providing insight into some of the internal components they use and is basically telling the security community: you can have a look at this and see if it is secure or not.
It’s also good from the perspective that AI is constantly getting bigger as an industry. Bringing generative AI components into regular consumer devices, and getting people to trust AI services with their data, is a really good step.
SI: What do you see as the pros of Apple’s approach to securing AI in the cloud?
Boonen: Other cloud providers do provide high-security guarantees for data that’s stored in their clouds. Many businesses, including IBM, trust their corporate data to these cloud providers. But a lot of the time, the processes used to secure data aren’t visible to consumers; providers don’t explain exactly what they do. The biggest difference here is that Apple is providing a transparent environment for customers to test those claims.
Boonen: Currently, the most capable AI models are very big, and that makes them very useful. But when we want AI on consumer devices, there’s a tendency for vendors to ship small models that can’t answer all questions, so they rely on larger models in the cloud. That comes with additional risk. But I think it is inevitable that the whole industry will move to that cloud model for AI. Apple is implementing this now because they want to give consumers trust in the AI process.
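The on-device/cloud split Boonen describes is essentially a fallback router. A minimal sketch under assumed names (this is not Apple’s actual API or architecture):

```python
# Hypothetical on-device/cloud routing; all names are illustrative.
def on_device_model(prompt: str):
    """Small local model: answers the few things it can, else None."""
    known = {"timer": "Timer set.", "weather": "Sunny, 22C."}
    for keyword, answer_text in known.items():
        if keyword in prompt.lower():
            return answer_text
    return None  # question is too hard for the small model

def cloud_model(prompt: str) -> str:
    """Stand-in for the larger cloud-hosted model (the extra risk surface)."""
    return f"[cloud answer for: {prompt}]"

def answer(prompt: str):
    """Return (response, tier) so callers can audit what left the device."""
    local = on_device_model(prompt)
    if local is not None:
        return local, "on-device"
    return cloud_model(prompt), "cloud"

print(answer("What's the weather like?"))  # handled locally
print(answer("Summarize this contract"))   # falls back to the cloud
```

Returning the tier alongside the response is the key privacy affordance: it makes visible which queries stayed on the device and which incurred the cloud risk Boonen mentions.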
SI: Apple’s system doesn’t play well with other systems and products. How will Apple’s efforts to secure AI in the cloud benefit other systems?
Boonen: They are providing a design template that other providers like Microsoft, Google and Amazon can then replicate. I think it is mostly effective as an example for other providers to say maybe we should implement something similar and provide similar testing capabilities for our end consumers. So I don’t think this directly impacts other providers, except to push them to be more transparent in their processes.
It’s also key to mention Apple’s bug bounty, as they invite researchers in to look at their system. Apple has a history of not always working well with the security community, and there have been cases in the past where they’ve refused to pay out bounties for issues found by researchers. So I’m not sure they’re doing this entirely out of interest in attracting researchers, but also in part to convince their end users that they are doing things securely.
That being said, having read their design documentation, which is extensive, I think they’re doing a pretty good job of addressing security around AI in the cloud.
DeepSeek App Transmits Sensitive User and Device Data Without Encryption

A new audit of DeepSeek's mobile app for the Apple iOS operating system has found glaring security issues, the foremost being that it sends sensitive data over the internet sans any encryption, exposing it to interception and manipulation attacks.
The assessment comes from NowSecure, which also found that the app fails to adhere to best security practices and collects extensive user and device data.
"The DeepSeek iOS app sends some mobile app registration and device data over the Internet without encryption," the company said. "This exposes any data in the internet traffic to both passive and active attacks."
The teardown also revealed several implementation weaknesses when it comes to applying encryption on user data. This includes the use of an insecure symmetric encryption algorithm (3DES), a hard-coded encryption key, and the reuse of initialization vectors.
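Why these weaknesses matter can be shown with a toy cipher. The sketch below is illustrative only: it is not DeepSeek’s code and not a real cipher, but it demonstrates what the audit’s findings imply in practice, namely that a hard-coded key plus a reused IV lets an observer correlate messages and even cancel the key out entirely.

```python
import hashlib

def toy_stream_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    # Toy keystream cipher (for illustration only, NOT a real cipher):
    # a keystream derived from key+iv is XORed with the plaintext.
    keystream = b""
    counter = 0
    while len(keystream) < len(plaintext):
        keystream += hashlib.sha256(key + iv + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

KEY = b"hard-coded-key!!"   # anti-pattern 1: key ships inside the app binary
IV = b"static-iv"           # anti-pattern 2: the same IV for every message

c1 = toy_stream_encrypt(KEY, IV, b"user_id=1234")
c2 = toy_stream_encrypt(KEY, IV, b"user_id=1234")
# Reused key+IV: identical plaintexts produce identical ciphertexts,
# so an observer learns when two messages match.
print(c1 == c2)  # True

# Worse: XORing two ciphertexts cancels the keystream, leaving only
# the XOR of the two plaintexts -- no key needed to see the structure.
c3 = toy_stream_encrypt(KEY, IV, b"user_id=9999")
leak = bytes(a ^ b for a, b in zip(c1, c3))
# The shared "user_id=" prefix shows up as a run of zero bytes in 'leak'.
```

A proper design would use a modern authenticated cipher (such as AES-GCM) with a fresh random nonce per message and keys provisioned at runtime rather than embedded in the binary.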
What's more, the data is sent to servers that are managed by a cloud compute and storage platform named Volcano Engine, which is owned by ByteDance, the Chinese firm that also operates TikTok.
"The DeepSeek iOS app globally disables App Transport Security (ATS), which is an iOS platform-level protection that prevents sensitive data from being sent over unencrypted channels," NowSecure said. "Since this protection is disabled, the app can (and does) send unencrypted data over the internet."
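Globally disabling ATS comes down to a single Info.plist setting, which is why auditors check for it. The fragment below is illustrative (not DeepSeek’s actual file); `NSAllowsArbitraryLoads` is the real iOS key that switches ATS off for all connections.

```xml
<key>NSAppTransportSecurity</key>
<dict>
    <!-- Setting this key to true disables ATS for every connection,
         allowing the app to make plain-HTTP requests. -->
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>
```

Apple requires developers to justify this exception during App Review precisely because it removes the platform’s transport-encryption guarantee.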
The findings add to a growing list of concerns that have been raised around the artificial intelligence (AI) chatbot service, even as it skyrocketed to the top of the app store charts on both Android and iOS in several markets across the world.
Cybersecurity company Check Point said that it observed instances of threat actors leveraging AI engines from DeepSeek, alongside Alibaba Qwen and OpenAI ChatGPT, to develop information stealers, generate uncensored or unrestricted content, and optimize scripts for mass spam distribution.
"As threat actors utilize advanced techniques like jailbreaking to bypass protective measures and develop info stealers, financial theft, and spam distribution, the urgency for organizations to implement proactive defenses against these evolving threats ensures robust defenses against potential misuse of AI technologies," the company noted.
Earlier this week, the Associated Press revealed that DeepSeek's website is configured to send user login information to China Mobile, a state-owned telecommunications company that has been banned from operating in the United States.
The app's Chinese links, much like TikTok's, have prompted lawmakers to push for a nationwide ban on DeepSeek from government devices over risks that it could provide user information to Beijing.
It's worth noting that several countries, including Australia, Italy, the Netherlands, Taiwan, and South Korea, as well as government agencies in India and the United States, such as Congress, NASA, the Navy, the Pentagon, and the state of Texas, have instituted bans on DeepSeek on government devices.
DeepSeek's explosion in popularity has also led to it battling malicious attacks, with Chinese cybersecurity firm XLab telling Global Times that the service has been subjected to sustained distributed denial-of-service (DDoS) attacks originating from Mirai botnets hailBot and RapperBot late last month.
Meanwhile, cybercriminals are wasting no time to capitalize on the frenzy surrounding DeepSeek to set up lookalike pages that propagate malware, fake investment scams, and fraudulent cryptocurrency schemes.
Market Impact Analysis
Market Growth Trend
2018 | 2019 | 2020 | 2021 | 2022 | 2023 | 2024 |
---|---|---|---|---|---|---|
8.7% | 10.5% | 11.0% | 12.2% | 12.9% | 13.3% | 13.4% |
Quarterly Growth Rate
Q1 2024 | Q2 2024 | Q3 2024 | Q4 2024 |
---|---|---|---|
12.5% | 12.9% | 13.2% | 13.4% |
Market Segments and Growth Drivers
Segment | Market Share | Growth Rate |
---|---|---|
Network Security | 26% | 10.8% |
Cloud Security | 23% | 17.6% |
Identity Management | 19% | 15.3% |
Endpoint Security | 17% | 13.9% |
Other Security Solutions | 15% | 12.4% |
Competitive Landscape Analysis
Company | Market Share |
---|---|
Palo Alto Networks | 14.2% |
Cisco Security | 12.8% |
CrowdStrike | 9.3% |
Fortinet | 7.6% |
Microsoft Security | 7.1% |
Future Outlook and Predictions
The cybersecurity landscape is evolving rapidly, driven by technological advancements, changing threat vectors, and shifting business requirements. Based on current trends and expert analyses, we can anticipate several significant developments across different time horizons:
Year-by-Year Technology Evolution
Based on current trajectory and expert analyses, we can project the following development timeline:
Technology Maturity Curve
Different technologies within the ecosystem are at varying stages of maturity, influencing adoption timelines and investment priorities:
Innovation Trigger
- Generative AI for specialized domains
- Blockchain for supply chain verification
Peak of Inflated Expectations
- Digital twins for business processes
- Quantum-resistant cryptography
Trough of Disillusionment
- Consumer AR/VR applications
- General-purpose blockchain
Slope of Enlightenment
- AI-driven analytics
- Edge computing
Plateau of Productivity
- Cloud infrastructure
- Mobile applications
Technology Evolution Timeline
- Technology adoption accelerating across industries
- Digital transformation initiatives becoming mainstream
- Significant transformation of business processes through advanced technologies
- New digital business models emerging
- Fundamental shifts in how technology integrates with business and society
- Emergence of new technology paradigms
Expert Perspectives
Leading experts in the cybersecurity sector provide diverse perspectives on how the landscape will evolve over the coming years:
"Technology transformation will continue to accelerate, creating both challenges and opportunities."
— Industry Expert
"Organizations must balance innovation with practical implementation to achieve meaningful results."
— Technology Analyst
"The most successful adopters will focus on business outcomes rather than technology for its own sake."
— Research Director
Areas of Expert Consensus
- Acceleration of Innovation: The pace of technological evolution will continue to increase
- Practical Integration: Focus will shift from proof-of-concept to operational deployment
- Human-Technology Partnership: Most effective implementations will optimize human-machine collaboration
- Regulatory Influence: Regulatory frameworks will increasingly shape technology development
Short-Term Outlook (1-2 Years)
In the immediate future, organizations will focus on implementing and optimizing currently available technologies to address pressing cybersecurity challenges:
- Technology adoption accelerating across industries
- Digital transformation initiatives becoming mainstream
These developments will be characterized by incremental improvements to existing frameworks rather than revolutionary changes, with emphasis on practical deployment and measurable outcomes.
Mid-Term Outlook (3-5 Years)
As technologies mature and organizations adapt, more substantial transformations will emerge in how security is approached and implemented:
- Significant transformation of business processes through advanced technologies
- New digital business models emerging
This period will see significant changes in security architecture and operational models, with increasing automation and integration between previously siloed security functions. Organizations will shift from reactive to proactive security postures.
Long-Term Outlook (5+ Years)
Looking further ahead, more fundamental shifts will reshape how cybersecurity is conceptualized and implemented across digital ecosystems:
- Fundamental shifts in how technology integrates with business and society
- Emergence of new technology paradigms
These long-term developments will likely require significant technical breakthroughs, new regulatory frameworks, and evolution in how organizations approach security as a fundamental business function rather than a technical discipline.
Key Risk Factors and Uncertainties
Several critical factors could significantly impact the trajectory of cybersecurity evolution:
Organizations should monitor these factors closely and develop contingency strategies to mitigate potential negative impacts on technology implementation timelines.
Alternative Future Scenarios
The evolution of technology can follow different paths depending on various factors including regulatory developments, investment trends, technological breakthroughs, and market adoption. We analyze three potential scenarios:
Optimistic Scenario
Rapid adoption of advanced technologies with significant business impact
Key Drivers: Supportive regulatory environment, significant research breakthroughs, strong market incentives, and rapid user adoption.
Probability: 25-30%
Base Case Scenario
Measured implementation with incremental improvements
Key Drivers: Balanced regulatory approach, steady technological progress, and selective implementation based on clear ROI.
Probability: 50-60%
Conservative Scenario
Technical and organizational barriers limiting effective adoption
Key Drivers: Restrictive regulations, technical limitations, implementation challenges, and risk-averse organizational cultures.
Probability: 15-20%
Scenario Comparison Matrix
Factor | Optimistic | Base Case | Conservative |
---|---|---|---|
Implementation Timeline | Accelerated | Steady | Delayed |
Market Adoption | Widespread | Selective | Limited |
Technology Evolution | Rapid | Progressive | Incremental |
Regulatory Environment | Supportive | Balanced | Restrictive |
Business Impact | Transformative | Significant | Modest |
Transformational Impact
Technology becoming increasingly embedded in all aspects of business operations. This evolution will necessitate significant changes in organizational structures, talent development, and strategic planning processes.
The convergence of multiple technological trends—including artificial intelligence, quantum computing, and ubiquitous connectivity—will create both unprecedented security challenges and innovative defensive capabilities.
Implementation Challenges
Technical complexity and organizational readiness remain key challenges. Organizations will need to develop comprehensive change management strategies to successfully navigate these transitions.
Regulatory uncertainty, particularly around emerging technologies like AI in security applications, will require flexible security architectures that can adapt to evolving compliance requirements.
Key Innovations to Watch
Artificial intelligence, distributed systems, and automation technologies leading innovation. Organizations should monitor these developments closely to maintain competitive advantages and effective security postures.
Strategic investments in research partnerships, technology pilots, and talent development will position forward-thinking organizations to leverage these innovations early in their development cycle.
Technical Glossary
Key technical terms and definitions to help understand the technologies discussed in this article.
Understanding the following technical concepts is essential for grasping the full implications of the security threats and defensive measures discussed in this article. These definitions provide context for both technical and non-technical readers.