AI can be confusing, and it comes with a lot of new terminology. What is "Copilot"? Why are there so many different kinds of "Copilot"? How does this affect what technology you should purchase? This page provides short summaries of these terms to help you navigate this new AI landscape.
What is a "Copilot+ PC"?
Copilot+ PCs are a category of Windows computers, defined by Microsoft and announced in 2024, that focus heavily on AI-powered features and on-device processing. They are built with advanced hardware, including neural processing units (NPUs), to handle AI tasks more efficiently than traditional CPUs. For Microsoft to certify a device as a Copilot+ PC, its NPU must be able to perform at least 40 trillion operations per second (40 TOPS).
ARM and x86 Architectures
ARM
ARM processors are known for their low power consumption, making them ideal for mobile devices, tablets, and embedded systems.
ARM uses a RISC (Reduced Instruction Set Computing) design, meaning it relies on a smaller set of simpler instructions. This often leads to better performance per watt, and therefore greater power efficiency. ARM licenses its architecture, allowing companies to customize and optimize chips for specific use cases (e.g., Apple’s M-series chips). Buyers should carefully consider whether the software they use is compatible with ARM, especially in the case of Windows 11 on ARM, as the included translation layer for running x86 applications may perform inconsistently. (A quick way to check which architecture a machine reports is sketched at the end of this section.)
x86
x86 processors are generally more powerful for high-performance computing tasks, making them the standard in desktops, gaming PCs, and workstations. Decades of software development have created broad compatibility with operating systems and applications, especially in the Windows and enterprise computing ecosystems. The architecture is often considered less power-efficient than ARM, but how the two will compare on efficiency in the coming years remains to be seen.
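If you want to check which architecture a given machine reports, the minimal sketch below uses Python's standard library platform module. One caveat: an x86 build of Python running under a translation layer on an ARM machine may report the emulated architecture rather than the physical one.

    # A minimal check of the CPU architecture a machine reports.
    # Uses only Python's standard library; exact strings vary by OS.
    import platform

    arch = platform.machine().lower()

    if arch in ("arm64", "aarch64"):
        print("This machine reports an ARM processor.")
    elif arch in ("x86_64", "amd64"):
        print("This machine reports an x86 (64-bit) processor.")
    else:
        print("Unrecognized architecture string:", arch)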
Key Terms
NPU
An NPU (neural processing unit) is a specialized processor designed to accelerate machine learning (ML) and artificial intelligence (AI) tasks. Unlike traditional CPU or GPU cores, NPUs are optimized for parallel processing and low-power matrix operations, which are common in AI workloads like image recognition and natural language processing. In modern laptops and desktops, they are often used to offload well-suited, locally run AI tasks so the CPU and other components are free for other work.
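To make "matrix operations" concrete, the sketch below shows the kind of dense multiply-accumulate math that dominates AI workloads, run here on the CPU with NumPy purely for illustration. Real NPU workloads are dispatched through vendor runtimes such as ONNX Runtime or DirectML, not NumPy.

    import numpy as np

    # One layer of a neural network boils down to a matrix multiplication
    # like this, repeated millions of times; NPUs are built to run many
    # such multiply-accumulate operations in parallel at low power.
    inputs = np.random.rand(1, 512)     # one input vector
    weights = np.random.rand(512, 512)  # learned layer weights

    activations = inputs @ weights      # the operation an NPU accelerates
    print(activations.shape)            # (1, 512)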
TOPS
TOPS (trillions of operations per second) is a metric used to measure the performance of NPUs, GPUs, and other AI accelerators. It quantifies how many trillion operations (such as matrix multiplications) a processor can perform in one second.
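Here is the arithmetic worked through against the 40 TOPS Copilot+ PC threshold mentioned above; the 45-trillion figure is hypothetical.

    # 1 TOPS = one trillion (10^12) operations per second.
    ops_per_second = 45_000_000_000_000  # hypothetical NPU rating

    tops = ops_per_second / 1e12
    print(tops)        # 45.0 TOPS

    # Does this hypothetical NPU meet the Copilot+ PC minimum?
    print(tops >= 40)  # True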
LLM
A Large Language Model is a type of advanced artificial intelligence trained on massive amounts of text data to understand, generate, and respond to human language in a natural way. LLMs are designed to perform tasks such as answering questions, writing content, summarizing information, generating code, and even engaging in conversation.
Cloud LLM
A cloud LLM (Large Language Model) is an AI model hosted on remote servers, typically provided by companies like OpenAI, Microsoft, or Google. Users access it over the internet, leveraging the provider’s powerful infrastructure to handle complex language tasks. Cloud LLMs offer cutting-edge capabilities and scalability but require an online connection and may raise privacy concerns as data is processed offsite.
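As one illustration of what accessing a cloud LLM looks like in practice, the sketch below uses OpenAI's Python client. It assumes the openai package is installed and an API key is set in the OPENAI_API_KEY environment variable; the model name is illustrative. Note that the prompt travels to the provider's servers, which is exactly where the privacy considerations arise.

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # The prompt is sent over the internet and processed on the
    # provider's infrastructure, not on your own machine.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user",
                   "content": "Explain what an NPU is in one sentence."}],
    )
    print(response.choices[0].message.content)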
Local LLM
A local LLM is an AI model that runs directly on a user’s personal computer or device without requiring an internet connection. It processes data locally, offering improved privacy and offline access. However, performance is often limited by the user’s hardware, and local models may not match the size or sophistication of cloud-based alternatives.
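For contrast, the sketch below runs a small language model locally with the Hugging Face transformers library. It assumes transformers and a backend such as PyTorch are installed; the first run downloads the model weights, but generation itself happens entirely on your own hardware. The tiny demo model used here will not match the quality of a cloud model.

    from transformers import pipeline

    # distilgpt2 is a small demo model; larger local models need more
    # RAM/VRAM and benefit from a capable GPU or NPU.
    generator = pipeline("text-generation", model="distilgpt2")

    result = generator("A Copilot+ PC is", max_new_tokens=20)
    print(result[0]["generated_text"])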
Important notes
Not all Copilot+ PCs are ARM-based, and not all computers with NPUs are certified as Copilot+ PCs.
TechHub will appropriately indicate the relevant AI features and functionality of our products.