
Edge Wars Heat Up as Arm Aims to Outflank Intel, Qualcomm

Chip designer Arm has a new edge AI platform optimized for the Internet of Things (IoT) that expands the size of AI models that can run on edge devices, includes a powerful new CPU, and enables developers to integrate more easily with popular AI frameworks.

It’s the first such platform based on the corporation’s v9 architecture and boasts numbers such as an eight-fold improvement in machine learning performance over Arm’s previous platform and a 70% improvement in IoT performance.

Arm’s new platform marks at least the third move by a chip player this week to expand its presence at the edge, where the push is on to bring as much compute power, AI capability, data processing and analysis, and security as possible to where much of the data today is being created.

“We can only realize the potential of AI if we move it to the physical devices and the environments that surround us,” Paul Williamson, senior vice president and general manager of Arm’s IoT line of business, told journalists. “In the world of IoT, it’s AI at the edge that matters most. Just a few years ago, edge AI workloads were much simpler than today. For example, they were focused on basic noise reduction or anomaly detection. But now the workloads have become much more complex and they’re trying to meet the demands of much more sophisticated use cases.”

Intel this week introduced the latest additions to its Xeon 6 processor lineup, including a system-on-a-chip (SoC) aimed at AI workloads at the edge and in networks and featuring integrated acceleration, connectivity, and security technologies to enable more workloads to run on fewer, smaller systems.

For its part, Qualcomm, known for its Snapdragon line of power-efficient chips for smartphones and PCs, introduced a new product brand portfolio — Dragonwing — for industrial and embedded IoT, networking, and cellular use cases ranging from energy and utilities to retail, manufacturing, telecommunications and supply chain.

“Leading edge AI, high-performance, low-power computing and unrivaled connectivity are built into custom hardware, software and service offerings designed for speed, scalability and reliability,” Don McGuire, senior vice president and chief marketing officer for Qualcomm, wrote in a blog post.

Much of this is driven by the enterprise adoption of the edge and IoT, connected devices that can range from massive industrial systems on manufacturing floors and smaller servers on distant oil rigs to autonomous vehicles, small sensors on windmills, and everything in between. And their numbers are growing, from 18 billion last year to [website] billion by 2033.

Chip makers are building more powerful — and power-efficient — CPUs, GPUs, and NPUs (neural processing units) to run in smaller, more capable systems from hardware makers. The goal is to meet the rapidly growing demand for more compute, data processing, and security capabilities where the data is being created, reducing the latency and costs that come with sending massive amounts of data to the cloud. Now AI models and workloads are making their way to the edge, and all of this is pushing developers to build AI and other software for the edge.

“We’re seeing the need for higher performance and superior efficiency to run the latest AI models, frameworks, and agents,” Arm’s Williamson said. “We’re seeing the need for improved security to protect the high-value software surrounding those. And we’re seeing the need for developers to be able to enhance, refine, and upgrade their software once it’s been deployed in the field.”

In use cases like industrial automation, smart cities, and smart homes, “the value of AI inferencing at the edge is becoming more and more evident,” he said.

Arm’s new v9 platform is designed to address much of that, creating the capability to run AI models with over 1 billion parameters on a device. It includes the designer’s new highly efficient Cortex-A320 CPU and Ethos-U85 edge accelerator, along with performance-enhancing tools like Scalable Vector Extension 2 (SVE2) for machine learning jobs, support for the BFloat16 data type, and Matrix Multiply Instructions for more efficient AI processing.

The v9 architecture also better addresses security issues key to computing at the edge. Features include Pointer Authentication (PAC), Branch Target Identification (BTI), and Memory Tagging Extension (MTE), which enable more memory safety, control-flow integrity, and software isolation.

“This isn’t just an incremental step forward,” Williamson said. “It represents a fundamental shift in how we’re approaching edge computing and AI processing. We believe it’s going to drive forward that edge AI revolution for years to come.”

A key change is that the latest platform removes the need for a microcontroller, he mentioned, adding that last year’s solution “focused on transforming network execution. This year, we’ve taken Ethos-U85 and we’ve updated the driver so that it can be driven directly by a Cortex-A320 without the need of a Cortex-M in the loop. This will improve latency and allow Arm’s partners to remove the cost and complexity of using these separate controllers to drive the NPU.”

Memory is also a key improvement, with the Cortex-A320 adding support for larger addressable memory than Cortex-M platforms. The CPU is also more flexible at handling multiple tiers of memory access latency, enabling the platform to handle edge AI use cases that have larger neural networks and need software flexibility.

“The continued demand for hardware to efficiently execute larger and multi-model networks is pushing memory size requirements, so systems with improved memory access performance are becoming really necessary to perform these more complex use cases,” he said.

For software developers, flexibility is the word. Arm has for years been building IoT development platforms, continuing that last year with the introduction of Kleidi, aimed at accelerating AI development on Arm’s CPU architecture. The first offerings through the program were the KleidiAI libraries for AI frameworks and KleidiCV for computer vision jobs. With the v9 platform comes Kleidi for IoT. KleidiAI is already integrated into AI frameworks like [website] and ExecuTorch to speed up the performance of models like Meta’s Llama and Microsoft’s Phi-3.

It delivers as much as a 70% improvement on the Cortex-A320 when running Microsoft’s Tiny Stories dataset on [website].

In addition, Cortex-A320 can run applications that use real-time operating systems, like FreeRTOS and Zephyr, Williamson mentioned. That said, through Arm’s A-Profile architecture there is also out-of-the-box support for Linux and portability capabilities for Android and other rich OSes.

“This brings unprecedented levels of flexibility and allows you to target multiple market segments, applications, or operating system offerings that our partners provide and gives you superb choice when you’re thinking about roadmaps for future products,” he noted. “For developers working on Linux, they can easily and quickly deploy that rich operating system on the A320. That’s going to save them time, money and effort, leading to faster time-to-market for them and their products.”.

Developers can take PyTorch applications at high-level environments and deploy them at the edge via the accelerations in the Cortex-A320 CPU.

“We also allowed, with the implementation of the direct connect of the neural processor to the A-Class core, the ability for them for the first time to directly address the same memory system as the AI accelerator for these sorts of always-on tasks, which will make that development easier as well,” Williamson stated.

With all that, “you will see some interesting, completely new configurations from people stretching the boundary of what would have previously been done in a microcontroller but also giving Linux-based developers optimized performance,” he noted.

Exploring IoT's Top WebRTC Use Cases

Around the world, 127 new devices are connected to the Internet every second. That translates to 329 million new devices hooked up to the Internet of Things (IoT) every month. The IoT landscape is expanding by the day, and, consequently, novel ways of running an IoT network are also evolving. An emerging area of interest is developing new ways of sharing data between IoT devices, like transmitting a video feed from a surveillance camera to a phone.

One well-known way to transmit data is with Web Real-Time Communication (WebRTC), a technology that enables web applications and physical devices to capture and stream media, as well as to exchange data between browsers and devices without requiring an intermediary. For developers creating a primarily audio- or video-based application, WebRTC is one of the best options available.

Here, I’ll explain when you should use WebRTC and some use cases, ranging from the practical to the creative.

As its full name states, WebRTC enables real-time communication by creating direct peer-to-peer connections between devices. This design eliminates the need for centralized servers, which in turn reduces delays and ensures faster data exchange. By connecting devices directly, WebRTC minimizes the time required for information to travel, making it ideal for applications requiring quick responses.

To maintain smooth performance, WebRTC dynamically adjusts the quality of audio and video streams based on network conditions. If bandwidth decreases, it lowers the bitrate to avoid interruptions, and when the connection improves, it increases the bitrate to enhance quality. This adaptability ensures a more consistent experience even in fluctuating network environments.

WebRTC works well with advanced media codecs like VP8 for video and Opus for audio. A codec is a tool that encodes and decodes data, turning raw audio or video signals into compressed formats that can be sent over networks. These codecs reduce the size of the data streams without sacrificing much quality, making it possible to send high-quality audio and video while using less bandwidth. For IoT devices like cameras or microphones, this is essential to keep communication clear and reliable, even when network conditions aren’t perfect.

WebRTC use cases are particularly suited for IoT applications requiring high-quality, low-latency communication. While it’s widely recognized for audio and video streaming, WebRTC also supports sending other types of data, such as sensor readings or control signals.

Here are three situations in which WebRTC excels:

  • Audio/visual applications. Devices that require real-time streaming capabilities can use WebRTC to ensure smooth, high-quality video and audio transmission.
  • Data transmission. WebRTC allows IoT devices to send and receive data that isn’t audio or video, such as sensor readings or device updates. For example, a smart thermostat could share temperature readings with other devices in a home automation system or receive adjustment commands directly from a user, all without a centralized server.
  • Real-time control. Remote commands for IoT devices, such as locking/unlocking doors or operating a robotic device, benefit from WebRTC’s low-latency capabilities.

In essence, WebRTC can handle both high-quality media streaming and efficient data sharing, making it a versatile tool for IoT developers.
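
To ground the data-transmission case, here is a minimal sketch in Python using the aiortc library (the library choice, the "sensors" channel name, and the temperature payload are illustrative assumptions, not part of any product mentioned above). It opens a data channel, prints the SDP offer that would be handed to a signaling layer, and streams JSON sensor readings once a peer connects:

```python
# Minimal sketch (assumed library: aiortc) of the "data transmission" use case:
# an IoT peer opens a WebRTC data channel and streams small JSON sensor
# readings to whoever completes the offer/answer exchange.
import asyncio
import json
import random

from aiortc import RTCPeerConnection


async def run_sensor_peer():
    pc = RTCPeerConnection()
    channel = pc.createDataChannel("sensors")  # non-media channel; name is arbitrary

    async def send_readings():
        # Push a reading every few seconds for as long as the channel is open.
        while channel.readyState == "open":
            reading = {"temperature_c": round(random.uniform(18.0, 24.0), 2)}
            channel.send(json.dumps(reading))
            await asyncio.sleep(5)

    @channel.on("open")
    def on_open():
        asyncio.ensure_future(send_readings())

    # Generate the local SDP offer. In a real deployment this is exchanged with
    # the remote peer over your own signaling channel (HTTP, MQTT, WebSocket...),
    # after which the connection stays up and readings flow peer-to-peer.
    await pc.setLocalDescription(await pc.createOffer())
    print(pc.localDescription.sdp)


if __name__ == "__main__":
    asyncio.run(run_sensor_peer())
```

The same pattern extends to the smart-thermostat scenario above: the payload is just application-defined JSON, so adjustment commands can flow back over the same channel.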

When it comes to imagining use cases for WebRTC, the possibilities are really endless. Most developers who use WebRTC are already very familiar with common use cases like home video surveillance, doorbell cameras, and dashcams, so I’m going to focus on less well-known applications that might not immediately come to mind.

From streamlining package deliveries to revolutionizing agriculture, WebRTC empowers IoT devices to offer real-time visibility and control, demonstrating its versatility in a wide range of scenarios. Here are some of the more diverse and innovative applications of WebRTC in the IoT world:

A smart mailbox equipped with a camera and WebRTC technology can instantly notify homeowners when packages are delivered, sending real-time alerts to their smartphones or other connected devices. This system can monitor not only the arrival of deliveries but also detect signs of theft or tampering.

WebRTC-enabled cameras in greenhouses or on agricultural fields can provide farmers with the ability to remotely monitor crop health and environmental conditions. These cameras can stream live footage, allowing farmers to visually assess plant growth, check for signs of pests or disease, and ensure irrigation systems are functioning properly. WebRTC also supports the integration of sensor data, such as soil moisture or temperature, so farmers can receive comprehensive updates and make timely decisions.

Fish tank enthusiasts can use WebRTC-enabled cameras to check on their fish remotely. These setups can monitor water levels and ensure automatic feeders are functioning properly, providing peace of mind while consumers are away from home.

Motion-activated cameras powered by WebRTC can be installed in natural habitats, such as forests or gardens, to capture wildlife sightings and behavioral patterns. These cameras enable researchers or nature enthusiasts to monitor animals in real time without disturbing the natural environment. With WebRTC, the footage can stream directly to smartphones or computers, allowing remote observation.

WebRTC-enabled fisheye cameras in weather stations can provide visual data on climate conditions, while sensor data can monitor metrics like humidity, rainfall, temperature, etc. The combination of video and sensor data improves the accuracy of weather forecasts, particularly in extreme or rapidly changing weather situations.

Beekeepers can use WebRTC-powered internal cameras to monitor the conditions inside beehives without disturbing the bees. These cameras allow beekeepers to observe hive behavior, such as the health of the queen, the activity of worker bees, and the presence of pests, all from a distance. WebRTC’s low-latency streaming makes it possible to track these conditions in real time, offering insights into hive activity.

Additionally, temperature, humidity, and weight sensors integrated into the beehive can be monitored through WebRTC, providing a full picture of hive health and helping beekeepers take timely action.

Sensors in the home can monitor light conditions, temperature, etc., and automatically adjust utilities based on preprogrammed instructions. Moreover, if a room system detects that no one is present, it can automatically adjust the heating or lighting to conserve energy.

The ability of WebRTC to provide real-time, secure, and high-quality data exchange offers new possibilities for creativity in IoT. Its versatility makes it ideal for innovation, offering developers the freedom to think beyond traditional limitations. By adopting WebRTC, IoT applications can evolve into smarter, faster, and more reliable systems in places never thought possible — like the inside of a beehive.

Loss Functions: The Key to Improving AI Predictions

How wrong is an AI model’s prediction? We can put an actual number on it. In machine learning, a loss function tracks the degree of error in the output from an AI model by quantifying the difference, or the loss, between a predicted value and the actual value. If the model’s predictions are accurate, the difference between these two numbers — the loss — is small. If the predictions are inaccurate, the loss is larger.

For example, a colleague built an AI model to forecast how many views his videos would receive on YouTube. The model was fed YouTube titles and forecasted the number of views the video would receive in its first week. When comparing the model’s forecasts to the actual number of views, the predictions were not very accurate. The model predicted that the cold brew video would bomb and that the pour-over guide video would be a hit, but this wasn’t the case. This is a hard problem to solve, and loss functions can help improve the model.

Loss functions define how well a model is performing mathematically. By calculating loss, we can adjust model parameters to see if the loss increases (worsens) or decreases (improves). A machine learning model is considered sufficiently trained when the loss is minimized below a predefined threshold. At a high level, loss functions fall into two categories: regression loss functions and classification loss functions.

Regression loss functions measure errors in continuous value predictions, such as house prices, temperature, or YouTube video views. These functions must be sensitive to both whether the forecast is correct and the degree to which it diverges from the ground truth.

The most common regression loss function is Mean Squared Error (MSE), calculated as the average squared difference between the predicted and true values across all training examples.

Squaring the error gives large mistakes a disproportionately heavy impact on overall loss, strongly penalizing outliers.

Mean Absolute Error (MAE), on the other hand, measures the average absolute difference between the predicted and actual values. Unlike MSE, MAE does not square the errors, making it less sensitive to outliers.

Choosing between MSE and MAE depends on the nature of the data. If there are a few extreme outliers that matter, such as temperature ranges in July in the southern United States, MSE is a good choice since it heavily penalizes large deviations. However, if the data contains outliers that should not overly influence the model, such as occasional surges in product sales, MAE is a better option.

Huber loss provides a compromise between MSE and MAE, acting like MSE for small errors and MAE for large errors. This makes it useful when penalizing large errors is necessary, but not too harshly.
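
Before looking at the YouTube numbers, here is a rough NumPy sketch of how these three regression losses are computed (plain formulas, not any particular framework's API); the view counts are invented placeholders rather than the actual data from the example above:

```python
# Illustrative NumPy implementations of the three regression losses.
import numpy as np

def mse(y_true, y_pred):
    """Mean Squared Error: average of squared differences."""
    return np.mean((y_true - y_pred) ** 2)

def mae(y_true, y_pred):
    """Mean Absolute Error: average of absolute differences."""
    return np.mean(np.abs(y_true - y_pred))

def huber(y_true, y_pred, delta=1.0):
    """Huber loss: quadratic for small errors, linear for large ones."""
    err = y_true - y_pred
    small = np.abs(err) <= delta
    squared = 0.5 * err ** 2
    linear = delta * (np.abs(err) - 0.5 * delta)
    return np.mean(np.where(small, squared, linear))

# Hypothetical first-week view counts vs. a model's predictions
views_actual = np.array([12000.0, 45000.0, 3000.0])
views_pred = np.array([30000.0, 15000.0, 5000.0])
print(mse(views_actual, views_pred),
      mae(views_actual, views_pred),
      huber(views_actual, views_pred, delta=10000.0))
```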

For the YouTube example, the MAE value summed up to an average prediction error of 16,000 views per video. The MSE loss function skyrocketed to over 400 million due to the squaring of large errors. The Huber loss also indicated poor predictions but provided a more balanced perspective, penalizing large errors less severely than MSE. However, these loss values are only meaningful when used to adjust model parameters and observe improvements.

Classification loss functions, in contrast to regression loss functions, measure accuracy in categorical predictions. These functions assess how well predicted probabilities or labels match actual categories, such as determining whether an email is spam or not.

Cross-entropy is the most widely used classification loss function, measuring how uncertain a model’s predictions are compared to actual outcomes. Entropy, in this context, represents uncertainty — a coin flip has low entropy, while rolling a six-sided die has higher entropy. Cross-entropy loss compares the certainty of the model’s predictions to the certainty of the ground truth labels.

Another classification loss function is hinge loss, which is commonly used in support vector machines (SVMs). Hinge loss encourages correct predictions with confidence, aiming to maximize the margin between classes. This makes it particularly useful in binary classification tasks where distinctions between classes must be clear.
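
A minimal NumPy sketch of these two classification losses, cross-entropy over predicted probabilities and hinge loss over raw margin scores, looks like this (the labels and scores are made up for illustration):

```python
# Sketch of the two classification losses described above, using NumPy.
import numpy as np

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    """Cross-entropy for binary labels (0/1) and predicted probabilities."""
    p = np.clip(p_pred, eps, 1 - eps)  # avoid log(0)
    return np.mean(-(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

def hinge(y_true, scores):
    """Hinge loss for labels in {-1, +1} and raw margin scores (as in SVMs)."""
    return np.mean(np.maximum(0.0, 1.0 - y_true * scores))

# Hypothetical spam-vs-not-spam example
labels01 = np.array([1, 0, 1, 1])            # 1 = spam
probs = np.array([0.9, 0.2, 0.6, 0.4])       # model's predicted P(spam)
labels_pm1 = 2 * labels01 - 1                # {-1, +1} encoding for hinge loss
margins = np.array([2.1, -1.5, 0.3, -0.2])   # raw classifier scores
print(binary_cross_entropy(labels01, probs), hinge(labels_pm1, margins))
```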

Calculating the loss function serves as a guide for improving the model. Loss values indicate how far off predictions are from actual results, enabling adjustments through optimization. The loss function acts as a feedback mechanism, directing the learning process. Lower loss indicates better alignment between predictions and true outcomes. After adjusting the YouTube prediction model, new forecasts resulted in lower loss values across all three functions, with the greatest improvement in MSE, as the model reduced the large prediction error for the pour-over video.

Loss functions not only evaluate model performance but also influence model training through optimization techniques like gradient descent. Gradient descent calculates the slope of the loss function with respect to each model parameter, determining the optimal direction to minimize loss. The model updates weight and bias terms iteratively until the loss is sufficiently minimized.
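
As an illustration of that loop, the sketch below fits a tiny linear model by gradient descent on MSE; the data, learning rate, and iteration count are arbitrary choices for demonstration, not the actual YouTube model:

```python
# Minimal gradient-descent sketch: fitting y ≈ w*x + b by minimizing MSE.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.1, 4.9, 7.2, 8.8])    # roughly y = 2x + 1

w, b = 0.0, 0.0
lr = 0.01                              # learning rate

for step in range(2000):
    y_pred = w * x + b
    err = y_pred - y
    loss = np.mean(err ** 2)           # MSE, the quantity being minimized
    grad_w = 2 * np.mean(err * x)      # d(loss)/dw
    grad_b = 2 * np.mean(err)          # d(loss)/db
    w -= lr * grad_w                   # step opposite the gradient
    b -= lr * grad_b

print(f"w={w:.2f}, b={b:.2f}, final MSE={loss:.4f}")
```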

In summary, a loss function serves as both a scorekeeper that measures model performance and a guide that directs learning. Thanks to loss functions, my colleague can continue tweaking his YouTube AI model to minimize loss and improve prediction accuracy.

Market Impact Analysis

Market Growth Trend

Year   Growth Rate
2018   7.5%
2019   9.0%
2020   9.4%
2021   10.5%
2022   11.0%
2023   11.4%
2024   11.5%

Quarterly Growth Rate

Quarter   Growth Rate
Q1 2024   10.8%
Q2 2024   11.1%
Q3 2024   11.3%
Q4 2024   11.5%

Market Segments and Growth Drivers

Segment               Market Share   Growth Rate
Enterprise Software   38%            10.8%
Cloud Services        31%            17.5%
Developer Tools       14%            9.3%
Security Software     12%            13.2%
Other Software        5%             7.5%

Competitive Landscape Analysis

Company      Market Share
Microsoft    22.6%
Oracle       14.8%
SAP          12.5%
Salesforce   9.7%
Adobe        8.3%

Future Outlook and Predictions

The edge AI and IoT landscape is evolving rapidly, driven by technological advancements, changing threat vectors, and shifting business requirements. Based on current trends and expert analyses, we can anticipate several significant developments across different time horizons:

Year-by-Year Technology Evolution

Based on current trajectory and expert analyses, we can project the following development timeline:

2024: Early adopters begin implementing specialized solutions with measurable results
2025: Industry standards emerging to facilitate broader adoption and integration
2026: Mainstream adoption begins as technical barriers are addressed
2027: Integration with adjacent technologies creates new capabilities
2028: Business models transform as capabilities mature
2029: Technology becomes embedded in core infrastructure and processes
2030: New paradigms emerge as the technology reaches full maturity

Technology Maturity Curve

Different technologies within the ecosystem are at varying stages of maturity, influencing adoption timelines and investment priorities:

Innovation Trigger

  • Generative AI for specialized domains
  • Blockchain for supply chain verification

Peak of Inflated Expectations

  • Digital twins for business processes
  • Quantum-resistant cryptography

Trough of Disillusionment

  • Consumer AR/VR applications
  • General-purpose blockchain

Slope of Enlightenment

  • AI-driven analytics
  • Edge computing

Plateau of Productivity

  • Cloud infrastructure
  • Mobile applications

Technology Evolution Timeline

1-2 Years
  • Technology adoption accelerating across industries
  • Digital transformation initiatives becoming mainstream
3-5 Years
  • Significant transformation of business processes through advanced technologies
  • New digital business models emerging
5+ Years
  • Fundamental shifts in how technology integrates with business and society
  • Emergence of new technology paradigms

Expert Perspectives

Leading experts in the software dev sector provide diverse perspectives on how the landscape will evolve over the coming years:

"Technology transformation will continue to accelerate, creating both challenges and opportunities."

— Industry Expert

"Organizations must balance innovation with practical implementation to achieve meaningful results."

— Technology Analyst

"The most successful adopters will focus on business outcomes rather than technology for its own sake."

— Research Director

Areas of Expert Consensus

  • Acceleration of Innovation: The pace of technological evolution will continue to increase
  • Practical Integration: Focus will shift from proof-of-concept to operational deployment
  • Human-Technology Partnership: Most effective implementations will optimize human-machine collaboration
  • Regulatory Influence: Regulatory frameworks will increasingly shape technology development

Short-Term Outlook (1-2 Years)

In the immediate future, organizations will focus on implementing and optimizing currently available technologies to address pressing software dev challenges:

  • Technology adoption accelerating across industries
  • Digital transformation initiatives becoming mainstream

These developments will be characterized by incremental improvements to existing frameworks rather than revolutionary changes, with emphasis on practical deployment and measurable outcomes.

Mid-Term Outlook (3-5 Years)

As technologies mature and organizations adapt, more substantial transformations will emerge in how security is approached and implemented:

  • Significant transformation of business processes through advanced technologies
  • New digital business models emerging

This period will see significant changes in security architecture and operational models, with increasing automation and integration between previously siloed security functions. Organizations will shift from reactive to proactive security postures.

Long-Term Outlook (5+ Years)

Looking further ahead, more fundamental shifts will reshape how cybersecurity is conceptualized and implemented across digital ecosystems:

  • Fundamental shifts in how technology integrates with business and society
  • Emergence of new technology paradigms

These long-term developments will likely require significant technical breakthroughs, new regulatory frameworks, and evolution in how organizations approach security as a fundamental business function rather than a technical discipline.

Key Risk Factors and Uncertainties

Several critical factors could significantly impact the trajectory of software dev evolution:

Technical debt accumulation
Security integration challenges
Maintaining code quality

Organizations should monitor these factors closely and develop contingency strategies to mitigate potential negative impacts on technology implementation timelines.

Alternative Future Scenarios

The evolution of technology can follow different paths depending on various factors including regulatory developments, investment trends, technological breakthroughs, and market adoption. We analyze three potential scenarios:

Optimistic Scenario

Rapid adoption of advanced technologies with significant business impact

Key Drivers: Supportive regulatory environment, significant research breakthroughs, strong market incentives, and rapid user adoption.

Probability: 25-30%

Base Case Scenario

Measured implementation with incremental improvements

Key Drivers: Balanced regulatory approach, steady technological progress, and selective implementation based on clear ROI.

Probability: 50-60%

Conservative Scenario

Technical and organizational barriers limiting effective adoption

Key Drivers: Restrictive regulations, technical limitations, implementation challenges, and risk-averse organizational cultures.

Probability: 15-20%

Scenario Comparison Matrix

Factor                    Optimistic       Base Case     Conservative
Implementation Timeline   Accelerated      Steady        Delayed
Market Adoption           Widespread       Selective     Limited
Technology Evolution      Rapid            Progressive   Incremental
Regulatory Environment    Supportive       Balanced      Restrictive
Business Impact           Transformative   Significant   Modest

Transformational Impact

Technology is becoming increasingly embedded in all aspects of business operations. This evolution will necessitate significant changes in organizational structures, talent development, and strategic planning processes.

The convergence of multiple technological trends—including artificial intelligence, quantum computing, and ubiquitous connectivity—will create both unprecedented security challenges and innovative defensive capabilities.

Implementation Challenges

Technical complexity and organizational readiness remain key challenges. Organizations will need to develop comprehensive change management strategies to successfully navigate these transitions.

Regulatory uncertainty, particularly around emerging technologies like AI in security applications, will require flexible security architectures that can adapt to evolving compliance requirements.

Key Innovations to Watch

Artificial intelligence, distributed systems, and automation technologies are leading innovation. Organizations should monitor these developments closely to maintain competitive advantages and effective security postures.

Strategic investments in research partnerships, technology pilots, and talent development will position forward-thinking organizations to leverage these innovations early in their development cycle.

Technical Glossary

Key technical terms and definitions to help understand the technologies discussed in this article.

Understanding the following technical concepts is essential for grasping the full implications of the technologies discussed in this article. These definitions provide context for both technical and non-technical readers.

API (beginner)

APIs serve as the connective tissue in modern software architectures, enabling different applications and services to communicate and share data according to defined protocols and data formats.

Example: Cloud service providers like AWS, Google Cloud, and Azure offer extensive APIs that allow organizations to programmatically provision and manage infrastructure and services.

Platform (intermediate)

Platforms provide standardized environments that reduce development complexity and enable ecosystem growth through shared functionality and integration capabilities.
