Recent Posts

Pages: [1] 2 3 ... 10
1
Without Linux, there is no ChatGPT. No AI at all. None. Here's why.

Modern AI began with open source, and it ran on Linux. Today, Linux isn't just important for artificial intelligence; it's the foundation upon which today's entire modern AI stack runs. From hyperscale training clusters down to edge inference boxes, it's all Linux from top to bottom.

AI's magic tricks are really the aggregate output of very prosaic infrastructure: supercomputers, GPU farms, and cloud clusters that almost all run some flavor of Linux. The core machine-learning frameworks -- TensorFlow, PyTorch, scikit-learn, and friends -- were all developed and tuned first on Linux. The tooling around these frameworks, from Jupyter and Anaconda to Docker and Kubernetes, is similarly optimized for Linux.

Why IT jobs will live and die on Linux

Why? Because Linux is where researchers and production engineers actually deploy AI. Future IT jobs will live and die on Linux.

You see, AI runs on Linux because it's the most flexible, powerful, and scalable environment for the GPU‑heavy, distributed workloads modern AI requires. In addition, the entire tooling and cloud ecosystem has standardized on Linux.

Yes, every AI platform, whether it's OpenAI, Copilot, Perplexity, Anthropic, or your favorite AI chatbot, is built on Linux, plus drivers, libraries, and orchestration, all glued together in different ways. The proprietary bits may grab the branding, but without Linux, they're nowhere.

That translates into more Linux jobs.

As the Linux Foundation's 2025 State of Tech Talent Report noted, AI is driving a net increase in tech jobs, particularly Linux jobs. According to the report, "AI [is] reshaping roles rather than eliminating them, leading to shifts in skill demand and new opportunities for workforce growth."

Besides increasing Linux system and network administration jobs, the site Linux Careers sees "a rapidly emerging trend involving professionals who combine Linux expertise with artificial intelligence and machine learning operations." Such new AI/Linux jobs include AI Operations Specialist, MLOps Engineer, ML Engineer, and DevOps/AI Engineer.

Of course, Linux distributors know all this, which is why, when new Linux distros are released, their makers emphasize AI features.

For example, Canonical and Red Hat are racing to plant their Linux flags on Nvidia's new Vera Rubin AI supercomputer platform. The race is on to see who will own the operating system layer of "gigascale AI factories."

For its part, Red Hat is introducing Red Hat Enterprise Linux (RHEL) for Nvidia. This curated edition of RHEL is optimized specifically for Nvidia's Rubin platform, including the Vera Rubin NVL72 rack-scale systems.

The company says this variant will ship with Day 0 support for the Vera CPU, Rubin GPUs, and Nvidia's CUDA X stack, with validated OpenRM drivers and toolkits delivered directly through Red Hat repositories.

The Linux kernel and AI

Canonical is also rolling out official Ubuntu support for the Nvidia Rubin platform, also targeting the Vera Rubin NVL72. The London-headquartered company is anchoring its story around making the custom Arm-based Vera CPU a "first-class citizen," with x86 parity in its forthcoming Ubuntu 26.04 release.

So, unlike Red Hat, which is shipping a RHEL edition built just for Nvidia's processors, Canonical is folding Nvidia support into standard Ubuntu. This version will also upstream features such as nested virtualization and Arm Memory Partitioning and Monitoring (MPAM) to better partition memory bandwidth and cache for multi-tenant AI workloads.

What runs all this is a Linux kernel that has been steadily modified to keep up with AI's voracious appetite for hardware acceleration. Modern kernels juggle GPU and specialized accelerator drivers, sophisticated memory management for moving tensors around quickly, and schedulers tuned for massively parallel batch jobs.

In short, the kernel has been rewired over the last decade to become an operating system for AI hardware accelerators.

Memory: putting data where the GPUs are

Specifically, one of the most important enablers has been Heterogeneous Memory Management (HMM). This enables device memory, such as GPU video RAM (VRAM), to be integrated into Linux's virtual memory subsystem.

That, combined with the DMA buffer-sharing framework (DMA-BUF) and Non-Uniform Memory Access (NUMA) optimization, enables AI runtimes to keep tensors close to the accelerator and cut back on the data copying that tends to drag down performance.

Recent kernels also treat advanced CPU-GPU combinations, such as tightly coupled NUMA-style CPU/GPU nodes, as first-class citizens. With this, memory can be migrated between CPU-attached RAM and high-bandwidth GPU memory on demand.

This, as Nvidia explained, "enables the CPU and GPU to share a single per-process page table, enabling all CPU and GPU threads to access all system-allocated memory."
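The NUMA node layout those kernels manage is visible from user space. Here's a minimal Python sketch that reads Linux's standard sysfs entries (assuming a Linux system; on coherent CPU/GPU platforms, device-attached memory can show up as extra, sometimes CPU-less, nodes):

```python
import glob
import os

# Each NUMA node the kernel knows about appears under
# /sys/devices/system/node/nodeN. The "cpulist" file lists the CPUs
# attached to that node; an empty list can indicate memory-only nodes
# such as GPU- or CXL-attached memory exposed to the VM subsystem.
nodes = sorted(glob.glob("/sys/devices/system/node/node[0-9]*"))
for node in nodes:
    with open(os.path.join(node, "cpulist")) as f:
        cpus = f.read().strip()
    print(f"{os.path.basename(node)}: CPUs [{cpus or 'memory-only'}]")
```

On a typical laptop this prints a single node0 covering all CPUs; on a rack-scale CPU/GPU system, the same interface is what runtimes consult before deciding where to place tensors.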

Accelerators: a real subsystem, not an add-on

Linux now has a dedicated compute accelerators subsystem that's designed to expose GPUs, Tensor Processing Units (TPUs), and custom AI application-specific integrated circuits (ASICs) to your AI and machine learning (ML) programs.

On top of that, GPU support has matured from graphics-first to compute-heavy, via the Direct Rendering Manager (DRM), open stacks like ROCm and OpenCL, and Nvidia's Compute Unified Device Architecture (CUDA) drivers.

Kernel work has expanded to cover newer AI accelerators such as Intel's Habana Gaudi, Google's Edge TPU, and FPGA/ASIC boards, with drivers and bus abstractions. This enables AI programs such as PyTorch or TensorFlow to see and use them as just another device. Thus, anyone making new AI silicon today rightly assumes that Linux will be running on it.
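As a quick way to see whether a machine exposes any of these devices, here's a minimal Python sketch (assuming a Linux system; the paths are the standard device nodes created by the accel subsystem and the Direct Rendering Manager):

```python
import glob

# The compute-accelerator subsystem exposes devices as
# /dev/accel/accel0, /dev/accel/accel1, ... while DRM-managed GPUs
# expose compute-capable "render nodes" as /dev/dri/renderD128 and up.
accels = sorted(glob.glob("/dev/accel/accel*"))
render_nodes = sorted(glob.glob("/dev/dri/renderD*"))

print("AI accelerators:", accels if accels else "none found")
print("GPU render nodes:", render_nodes if render_nodes else "none found")
```

Frameworks like PyTorch discover devices through their own runtime libraries (CUDA, ROCm) rather than by globbing these raw nodes, but the nodes are what those runtimes ultimately open underneath.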

Scheduling: feeding hungry accelerators

Linux's default Earliest Eligible Virtual Deadline First (EEVDF) scheduler, the real-time schedulers, and NUMA balancing have all been tuned to let AI workloads pin CPUs, isolate noisy neighbors, and feed accelerators without jitter. Work on raising the default kernel timer frequency from 250 Hz to 1000 Hz is already showing measurable boosts in Large Language Model (LLM) acceleration with negligible power cost.

While 1000 Hz isn't the mainline default, some distro kernels, such as Ubuntu's low-latency builds, now ship with it as a standard setting.
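The CPU-pinning part of that tuning is visible even from user space. Here's a minimal sketch using Python's Linux-only stdlib affinity calls (no AI framework required):

```python
import os

# Ask the kernel which CPUs this process is currently allowed to run on.
allowed = os.sched_getaffinity(0)  # 0 = the calling process
print("eligible CPUs before pinning:", sorted(allowed))

# Pin the process to a single CPU -- the same trick AI serving stacks use
# to stop a GPU-feeding thread from migrating between cores.
target = min(allowed)
os.sched_setaffinity(0, {target})
print("eligible CPUs after pinning:", sorted(os.sched_getaffinity(0)))

# Restore the original mask so the rest of the program is unaffected.
os.sched_setaffinity(0, allowed)
```

Production stacks usually do this from the launcher (for example, taskset or numactl) rather than in application code, but the kernel interface underneath is the same.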

Direct paths: cutting out the CPU middleman

Modern kernels allow GPUs to access memory, storage, and even peer devices directly, using technologies such as Nvidia's GPUDirect and peer-to-peer DMA. Combined with Compute Express Link (CXL) and improved Input/Output Memory Management Unit (IOMMU) handling, it enables accelerators to bypass the CPU when moving data. This eliminates bottlenecks that previously stalled ML training runs. This invisible plumbing is why AI clusters can scale out without collapsing under their own I/O.

What all this adds up to is that, when executives talk about "AI strategy," the unglamorous reality they leave unsaid is that AI strategy depends on managing Linux at scale. It's all about patching kernels, hardening containers, and securing opaque workloads. AI may get the headlines, but Linux remains the operating system doing the actual work.

source
2


Well friends and true believers, the day is finally here. A new Windows phone has arrived. Thanks to Nex Computing and its NexPhone, a new smartphone is coming to market that dual boots Windows and Android, and it can even run desktop Linux when it's docked.

We are so back.

It runs full Windows on Arm, of course, since there's no actual mobile version of Windows 11. However, Nex designed a full skin to make it look like the Windows Phone UI that we all remember and love. Not only is it fitted with transparent tiles, but it also includes familiar actions like swiping left for an all-apps list.

A phone that's designed to be used as a PC
It's meant to be your only computer





If you're an old Windows Phone fan like I am, you probably remember Nex Computing, or more specifically, the NexDock. It was a laptop-style device that you could plug a phone into and turn it into a PC, thanks to Windows 10 Mobile's Continuum feature.

In its announcement, founder and CEO Emre Kosmaz talks about how this was always the dream: one device that can be your only computer. You could argue it's not even a phone, but more of an ultramobile PC that happens to include telephony.

He shared a concept video from 2012.


In a small meeting room in the Las Vegas Convention Center at CES 2026, our PC Hardware Segment Lead Rich Pinnock-Edmonds and I got to meet with Kosmaz, and he demoed the NexPhone for us. There's nothing on the market like it.

In fact, there isn't actually supposed to be anything on the market like it. Under the hood, there's a Qualcomm Dragonwing QCM6490 (a modified Snapdragon 778G), rather than a Snapdragon X2 Elite or something more mobile-friendly. The reason is that it's the only processor that supports Windows, Android, and Linux; it's not even meant for full desktop Windows.



As you'd expect, it does what it says. When connected to a dock, you get a desktop environment. It can be Windows or Android, which you'll have to boot into, and you can launch Debian Linux from Android.

On mobile, Windows doesn't shine quite as much as it once did. While the mobile UI is fantastic, there isn't much to be done about the fact that Windows apps simply don't have the responsive design to adapt to smaller screens the way they once did under UWP. So, while the UI is familiar, launching an app on the phone isn't too pleasant.

The NexPhone is coming in Q3
And it's relatively inexpensive



I know what you're thinking. When will I be able to buy the first phone running Windows in nearly a decade? Sadly, you'll have to wait a while longer, as it's coming in Q3.

It'll set you back $549, which feels inexpensive given that Nex has developed a truly unique product with the NexPhone. You can reserve it starting today for $199 (refundable), with the other $350 due when it ships.

Other specs include a 6.58-inch 1080p LCD, 12GB RAM, 256GB storage with microSD expansion, a 5,000mAh battery, and dual rear cameras with a 64MP main sensor and a 13MP ultra-wide sensor.



source
3
Windows on ARM / Nvidia prepares multiple Arm-based chips for 2026 and 2027
« Last post by javajolt on January 21, 2026, 02:13:16 AM »
First-gen could debut with Windows 11 26H1



Nvidia hasn’t given up on its plans to ship a Windows on Arm chip this year. Supply-chain sources claim Nvidia is still on track, with N1X-based laptops expected as early as Q1 2026. They will likely come with Windows 11 26H1 out of the box, as Windows Latest previously reported that this particular OS release is for new silicon.



A Digitimes (Chinese) report based on supply-chain chatter says NVIDIA is trying to expand beyond GPUs and push deeper into PCs, especially Windows on Arm (WoA) laptops. The same roadmap also talks about newer chips after N1/N1X, moving toward N2 and N2X in 2027.

NVIDIA has not confirmed these timelines, and it usually avoids commenting on supply-chain leaks.

What are N1 and N1X Nvidia “AI PC” chips?

While N1 is likely for desktops, N1X is for notebooks, and Nvidia reportedly plans to announce N1X-based laptops running Windows 11 26H1 in Q1 (by the end of March 2026). Initially, Nvidia is focusing on consumer models, but there are plans for other variants, and we could see those PCs in Q2 2026.



N1X isn’t just a rumor, because NVIDIA is already using it in DGX Spark. The report says DGX Spark is based on N1X and includes the GB10 “superchip” with 128GB unified memory. Supply-chain sources also claim the same N1X platform will appear in Windows on Arm laptops as early as Q1 2026, including consumer models.

Multiple OEMs (Acer, Asus, Dell, Gigabyte, HP, Lenovo, MSI) are reportedly building their own DGX Spark systems.

According to Digitimes, Nvidia’s N1X notebooks were supposed to debut in late 2025, but they were pushed to 2026 due to Microsoft OS timing. It appears that Digitimes is referring to recent platform changes, which will begin shipping with Windows 11 26H1 in the coming months.



For those unaware, Microsoft officially confirmed that Windows 11 26H1 is for new Silicon, but it never said whether it’s only for Snapdragon X2 PCs.

“26H1 is not a feature update for version 25H2 and only includes platform changes to support specific silicon,” Microsoft noted in a blog post published in November 2025. 26H1 does not have exclusive features, but it’s based on a new platform release, which means it could include N1X and Snapdragon X2-related optimizations.

The report also claims that N1X chips were delayed due to weaker or uncertain notebook demand, as well as memory supply and pricing problems, which matter a lot for unified-memory designs.

Either way, it looks almost certain we’ll see Nvidia’s first Arm-based laptops in 2026.

More importantly, Nvidia is also working on N2 and N2X, which would be the next generation, with products launching starting around Q3 2027.

source
4
Almost exactly a year ago, Microsoft shared details regarding the hardening process of Domain Controllers (DCs) to protect them against a couple of security flaws in Kerberos. Now, it is kicking off yet another hardening phase to patch DCs against security issues recently reported via CVE-2026-20833.

Basically, there is a vulnerability in the Kerberos authentication protocol that allows an attacker to exploit weak and legacy encryption algorithms like RC4 and procure service tickets that enable them to steal credentials for service accounts. This exploit is tagged as CVE-2026-20833, and applies to DCs running the following SKUs of Windows Server:

   • Windows Server 2008 Premium Assurance

   • Windows Server 2008 R2 Premium Assurance

   • Windows Server 2012 ESU

   • Windows Server 2012 R2 ESU

   • Windows Server 2016

   • Windows Server 2019

   • Windows Server 2022

   • Windows Server 2025

To mitigate this issue, Microsoft has rolled out a few changes via the recent Patch Tuesday update. Right now, customers are in the "Initial Deployment Phase," during which the Redmond tech giant has released Windows updates that provide audit events for customers who might face compatibility issues due to the hardening process. It has also introduced an RC4DefaultDisablementPhase registry value to proactively enable DCs to use the AES-SHA1 algorithm when it is safe to do so.

This phase will continue until April 2026, at which point we'll enter the "Second Deployment Phase" that empowers DCs to utilize AES-SHA1 for accounts that do not have an explicit msDS-SupportedEncryptionTypes Active Directory attribute defined.

Finally, in July 2026, Microsoft will begin the "Enforcement Phase" that gets rid of the RC4DefaultDisablementPhase registry subkey.

In its dedicated support article, Microsoft has encouraged IT admins to apply January 2026's Patch Tuesday updates and begin actively monitoring audit events to see if they are ready to kick off the next phase of DC hardening. You can find out more details here.

source
5
If you are a long-time Neowin reader, chances are you may be among those Windows users who noticed a curious behavior back in the day: holding the Shift key while restarting didn’t trigger a full cold reboot; instead, the system would do something slightly different.

For those not familiar, when a user held down the Shift key while restarting Windows 95, the system behaved differently than during a full cold reboot. Instead of cycling the hardware completely, Windows displayed “Windows is restarting” and attempted what was essentially a fast restart. In a way, this was kind of like Fast Startup, which Microsoft introduced much later in Windows 8. (If you attempt Shift + Restart on Windows 11 or 10, you get into the Windows Recovery Environment, WinRE, instead.)

Veteran Microsoft Windows developer Raymond Chen has explained how this worked. In a newly penned article on his The Old New Thing blog, Chen notes that this behavior was part of the old 16‑bit ExitWindows function when it received the EW_RESTARTWINDOWS flag.

If you are wondering, the ExitWindows function is a legacy function used to log off the interactive Windows user, while the EW_RESTARTWINDOWS parameter, as the name suggests, is used to restart the system.

Chen has explained that the shutdown sequence began with the 16‑bit Windows kernel itself, followed by the 32‑bit virtual memory manager, and then the CPU dropping back into real mode.

After this, control returned to the bootstrap program win.com with a special signal, essentially asking, “Can you start protected-mode Windows again for me?”, instructing it to relaunch protected‑mode Windows. The code in win.com would then display the “Please wait while Windows restarts…” message as it tried to get the system back up as requested.

If you are trying to make sense of it, win.com was essentially the executable file used to load DOS-based Windows versions, like Windows 95. Meanwhile, real-mode Windows was an early design meant to run on PCs with minimal resources, like 192 KB of RAM and floppy drives, while protected-mode Windows was the full OS, with memory protection, a GUI, and all.

Chen notes that, by design, .com files claimed all conventional memory at launch, but win.com would release unused space to create one large contiguous block for protected‑mode Windows. So if another program had fragmented that memory space, the fast restart could not succeed, and win.com fell back to a full reboot. Otherwise, the fast restart continued as it re‑created the virtual machine manager and launched the graphical user interface (GUI), giving the user the impression of a seamless fast restart.

However, the process was not flawless as Raymond Chen adds, since some users reported that attempting two fast restarts in succession would lead to crashes, while others seemingly managed multiple fast restarts without issue. The likely explanation was that certain device drivers failed to reset properly, leaving corrupted memory that only revealed itself during shutdown.

source
6


While everyone obsesses over Intel, AMD, and Apple's M-series chips, something extraordinary just happened that most people completely missed: Huawei launched the world's first HarmonyOS-powered PC in May 2025, and at its core lies a processor you've probably never heard of—the Kunpeng 920.

This isn't just another laptop launch. This is Huawei's declaration of complete technological independence from the West, and it's rewriting the rules of global computing.

The Background:

2019: US government banned Huawei from Google services and critical chip technologies. Industry analysts predicted Huawei's collapse.

Instead of dying, Huawei used sanctions as rocket fuel for the most aggressive innovation push in tech history.

The Kunpeng 920 Chip:

Launched: January 2019 by Huawei's HiSilicon division

Original Purpose: 64-core ARM-based server CPU for data centers/cloud computing

Architecture: ARMv8 (not traditional x86 like Intel/AMD)

Process: 7nm technology (manufactured by TSMC before sanctions)

Performance: 930+ SPECint benchmark score (25% higher than industry standards)

Efficiency: 30% less power consumption than competitors

The Brilliant Strategy:

When Huawei lost access to Intel/AMD chips for consumer products, they didn't start from scratch. They repurposed their powerful server processor for personal computing. Like using a freight train engine for high-speed passenger rail—unconventional but effective.

Specifications:

✅ Up to 64 ARM cores

✅ 8-channel DDR4-2933 memory

✅ PCIe Gen4

✅ Integrated networking capabilities

Early Adoption (2020):

8-core Kunpeng 920 systems on Huawei D920S10 motherboards

Running UOS (Chinese Linux distribution)

Cost: ~7,500 yuan ($1,068)

Target: Government/educational institutions prioritizing technological sovereignty

Focus: Document editing, web browsing, enterprise productivity (not gaming)

source
7
I open Gmail dozens of times a day. It's where messages land, and occasionally get starred or archived. I never expected it to be useful beyond email.

However, tucked away in the sidebar is a note-taking app I’d overlooked, even though I use Gmail every single day.

After I started using Google Keep directly inside Gmail, it changed how I handle notes, ideas, and tasks during the workday.

Instead of context-switching or letting thoughts slip away, I now capture them alongside my messages.

The Gmail sidebar I kept ignoring

If you use Gmail on desktop, you’ve probably noticed the slim sidebar on the right. It holds Google Calendar, Google Tasks, Google Keep, and Contacts.

I’d always dismissed it as clutter, thinking of it as something Google added to push its ecosystem.

Out of curiosity (and mild frustration with my inbox), I clicked the Keep icon one day. A familiar panel slid open, showing my notes exactly as they appear on my phone.

By keeping tools like Google Keep just a click away, Gmail offered a lightweight way to capture notes without switching tabs.

How I use Google Keep alongside Gmail

Google Keep inside Gmail serves as a holding space for my thoughts and ideas while I manage my inbox.

If an email triggers an idea, a follow-up question, or something I want to think about later, I jot it down in Keep instead of leaving the message unread or starred.

That keeps my inbox focused on communication.

I also use Keep to extract context out of long email threads.

I’ll copy key details, such as dates, decisions, or action items, into a note so I don’t have to re-scan the entire conversation later.

It is especially useful for ongoing projects, where information often gets buried across multiple replies.

One limitation is that you can’t directly move an email into Google Keep. There’s no “send to Keep” button, which initially felt like a missed opportunity.

My workaround is simple: I open the email, copy its URL, and paste that link into a Keep note. That way, I can jump straight back to the original message whenever I need context.

This approach works well for ongoing threads. I’ll summarize the key points of the email in my own words, drop the Gmail link underneath, and move on.

It keeps my notes clean while still giving me a direct path back to the entire conversation.

Why this works better than copying emails elsewhere

Before I started using Keep inside Gmail, my default move was to copy email content into another app, whether it’s a note-taking app, a task manager, or a document.

However, every extra step made it less likely I’d capture the information in the first place.

Using Keep directly in Gmail is convenient. I don’t have to decide where something belongs or interrupt my flow by switching apps.

If a thought comes up while I’m reading an email, I capture it immediately.

There’s also less duplication and cleanup involved.

When I copy emails into other tools, I tend to over-save by copying entire threads or unnecessary details. With Keep, I’m more selective.

That makes the notes easier to scan and far more useful later.

Throughout the day, Keep becomes a lightweight scratchpad.

Drafting a reply, outlining a meeting agenda, or capturing something I want to revisit later all happen there, without ever leaving Gmail.

And because it syncs automatically, those notes show up in the mobile app as well.

The limitations of using Google Keep inside Gmail

As convenient as this setup is, it isn’t perfect.

Google Keep in Gmail is intentionally lightweight, and that simplicity means it won’t suit every workflow or every type of note.

The most significant limitation is structure. Keep doesn’t offer folders, nested notes, or advanced organization tools.

When your notes start piling up, relying on labels and search can feel a bit limiting compared to other note-taking apps.

It’s great for quick capture, but it’s not designed for managing large, long-term projects.

There’s also the lack of deeper email integration.

Since you can’t directly attach or move an email into Keep, referencing messages requires manual steps, such as copying links or summarizing content yourself.

Finally, the sidebar itself can feel cramped.

Writing longer notes or thinking through complex ideas in a narrow panel isn’t always comfortable.

When a note starts growing beyond a few lines, I often open Keep in a new tab.

Despite these limitations, I’ve stuck with this system because I’m not asking it to do too much.

Google Keep in Gmail doesn’t replace a full note-taking app; it’s there to catch thoughts before they disappear.

The Gmail feature that earned a permanent spot in my workflow

What surprised me most about using Google Keep inside Gmail is how little effort it takes. There’s no setup process or system to maintain.

By keeping note-taking right beside my inbox, I no longer have to trust my memory or let emails pile up as reminders.

It’s worth noting that this setup won’t replace a dedicated note-taking app. For me, its value lies in how seamlessly it fits into the day.

What once felt like a useless sidebar has transformed into one of the most consistently handy parts of Gmail, and now, I can’t imagine working without it.



source
8
The latest Google Messages app includes changes to the camera viewfinder and photo gallery.



I believe that somewhere in Mountain View, California, where Google has its corporate headquarters, there is a huge room where software developers go through the user interfaces of several apps and decide what changes they could make to improve the look of an app, or to add a new feature. Some apps, like Google Messages, Phone by Google, Google Maps, and Google Photos, are the subject of these meetings more often than others.

Google doesn't do this, but you might imagine that they do

To make sure that these apps are updated frequently, Google locks the developers inside this room without air conditioning, water, or a toilet. They are given nothing to eat or drink until the developers have come to an agreement on a UI change for the day's special app. No, I really don't think that Google does this, but it would explain some of the small, inconsequential changes that Google makes to an app with a software update. And this article shows you an example of this.

In all seriousness, many of these updates allowed Google to bring the design of these apps up to speed with the Material 3 Expressive design, which includes spring-based motion that adds bounce and stretch to swipe-based elements such as volume sliders. Buttons and icons change shape when you press on them, and variable fonts might change their looks to capture your attention, trying to get you to press a button to read an unread message (it would be kind of silly to prompt you to read one you've already read) or take a certain action.


The Google Messages app after the latest update. | Image credit-PhoneArena

The Google Messages app has received an update to its UI and it does result in a change to the camera and photo gallery. Open the Google Messages app and tap on a conversation. Press the "+" button at the left of the text bar on the bottom of the screen. Then tap on Camera (or Gallery if you want to skip the camera and go right to your image library). The updated version of Google Messages will feature a slightly smaller viewfinder that is actually a container with rounded corners at the top and bottom.

How to tell if you have the updated version of Google Messages

You can tell if you have the new version of Google Messages because if you have yet to receive the update, in the camera mode you'll see 1.5 horizontal rows from your gallery. With the shorter viewfinder in the updated version, that increases to 2 rows. If you want to run through your entire gallery, swipe up on the gallery sheet. The updated version was first spotted in a beta version of the Google Messages app and is now available in the stable version of Google Messages.

How to force the update to hit your Google Messages app

You might need to Force Stop the app to trigger the update. To do that, go to Settings > Apps > See all xxx apps. Scroll down to Messages and tap on it. That will take you to the Messages App Info page. Press the "Force Stop" button and close the app. When you reopen it, the update should be there. It worked for me. If you don't have the Google Messages app installed on your Android phone, you can get it from the Google Play Store by tapping on this link.

So, what is the advantage of this redesign for users? Well, it allows you to see a tad more of your photo gallery without having to swipe up on the gallery sheet. The rounded corners of the smaller viewfinder look pretty cool, and for me, that was enough to make updating the app worthwhile. Truthfully, I can't tell you that these changes should make you feel compelled to update the Google Messages app. That will be up to you, although I would still recommend doing it.

source
9


FlyOOBE has received another feature update to give you more control over various parts of Windows 11. This tool is among the most popular third-party utilities for Windows debloating, and as the dev says in the latest release notes, it recently reached nearly 2.5 million downloads on GitHub, showing how much people want to strip unnecessary stuff out of Windows 11. The newest release, version 2.4, makes FlyOOBE even better by improving its capabilities to detect and remove AI components (the so-called Slopilot).

The release notes for FlyOOBE 2.4 state that the main goal of the update is to give users a choice. AI is not necessarily bad, but users should have the option to turn it off. If you do not want any of it on your PC, FlyOOBE is now better at detecting AI features and offers a deep cleaning feature via the RemoveWindowsAI script, which we reported earlier.

Quote
“We need to get beyond the arguments of slop vs sophistication…” - Satya Nadella
Agreed, but users still deserve a choice! The AI OOBE control has been refined and is now officially called Slopilot. Slopilot isn't anti-AI - it's pro user choice

In addition to that, there are several improvements for browser detection, extension engine, themes, and more. Here is the full changelog:

   • This update improves detection of AI-related (Slopilot) features across Windows 11 and adds optional deep cleanup capabilities via external tooling such as RemoveWindowsAI.

   • Users can now better understand, review, and disable AI components they don't want, transparently and on their own terms.

   • Improved browser detection in the Browser OOBE.

   • Updated Extensions engine for better coverage and accuracy.

   • Global search functionality expanded and refined across all sections.

   • Improved theme detection in the Personalization OOBE.

   • Minor core optimizations and internal cleanups.

You can download FlyOOBE 2.4 from its official GitHub repository, which is the only official place to get the app. You can find the link to it on Neowin's Software page as well.

source
10
Microsoft / Microsoft confirms it’s killing offline phone-based activation
« Last post by javajolt on January 12, 2026, 10:06:45 AM »


In a statement to Windows Latest, Microsoft confirmed that it has retired the traditional Windows “telephone-based” activation method, which truly worked offline. With the telephone-based approach, you could call Microsoft’s activation phone number, and an automated process would activate your Windows or Office license.

I’m told the company retired the Windows or Office phone-based activation method as part of the efforts to modernize the ‘activation experience.’ If you want to activate Windows 11 or Office today, the easiest way is to link your license to a Microsoft account, which will automatically verify the product.



You can find out how your Windows is activated from Settings > System > Activation, then check ‘Activation state.’ In my case, and in most cases, it’s activated using a digital license linked to a Microsoft account.



Phone-based activation no longer works

Microsoft retired the phone-based activation method for Windows and Office products on December 3, 2025. When you call one of the listed telephone numbers for offline activation, you will be asked to use a Microsoft account instead. But that does not mean MSA is the only way to activate Windows.

For perpetual licenses, Microsoft tells me that advanced customers can use the Product Activation Portal.



Unlike phone-based activation, which did not require an internet connection at any point in the process, the new online Product Activation Portal wants you to log into a portal that is “secure, reliable, and user-friendly,” according to Microsoft. Once you’re logged in and have entered the details, you can still activate Windows offline.

“Customers who rely on traditional offline activation can continue using it without changes to their environment,” Microsoft argues. The company says the process has changed, and phone call-based activation is no longer supported, but that does not mean you cannot activate Windows offline.

“While the process has been updated, offline activation capabilities remain supported,” the company said.

What do you need to activate Windows using the new Product Activation Portal flow, after “activate by telephone” has been retired?

First, you need to reach the “Activate by Telephone” screen inside Windows (or the product you’re activating). You’ll be shown activation information and a phone number, but instead of calling, you can note down the info from that screen and use it in the Product Activation Portal.

However, unlike phone-based activation, the Product Activation Portal requires a browser and an internet connection, though the target PC can remain offline. Also, the portal cannot be used unless you have a Microsoft account.



The portal specifically supports a Personal Microsoft account (MSA), a work or school account, a Microsoft Entra ID account, or an Azure Government tenant account.

Did Microsoft retire “activate by telephone” to push MSA?

Some of you might argue that Microsoft retired phone-based activation because it allowed customers to avoid creating a Microsoft account. While it’s possible, I’m going to play devil’s advocate here.

I believe Microsoft retired the telephone-based approach due to low usage. Moreover, the Product Activation Portal is a better option because it covers all products, not just Windows and Office, which were the only two products phone-based activation supported.

source