Flash-accelerated AI memory brings industry-first large-model training and boosted inference to integrated GPUs
LAS VEGAS--(BUSINESS WIRE)--#AI--CES 2026--Phison Electronics (8299TT), a global leader in NAND flash controllers and storage solutions, today announced expanded capabilities for its aiDAPTIV+ technology that extends powerful AI processing to integrated GPU architectures. Built on Phison’s 25 years of flash memory expertise, the expanded aiDAPTIV+ architecture now accelerates inference, significantly increases memory capacity and simplifies deployment to unlock large-model AI capabilities on notebook PCs, desktop PCs and mini-PCs.




As organizations confront unprecedented data volumes and increasingly complex AI training and inference workloads, demand is rising for solutions that are both accessible and affordable on everyday devices. aiDAPTIV+ addresses these challenges, along with the market-wide memory shortage, by utilizing NAND flash as memory to remove compute bottlenecks, enabling on-premises inferencing and fine-tuning of large models on ubiquitous platforms.
Today’s announcement showcases the expanding innovation between Phison and its strategic partners, including unlocking larger LLM training on Acer laptops using significantly less DRAM. This enables users to run AI workloads on smaller platforms with the required data privacy, scalability, affordability and ease of use. For OEMs, resellers and system integrators, this technology also supports end-to-end solutions that overcome traditional GPU VRAM limitations.
“As AI models grow into tens and hundreds of billions of parameters, the industry keeps hitting the same wall with GPU memory limitations,” said Michael Wu, President and GM, Phison US. “By expanding GPU memory with high-capacity, flash-based architecture in aiDAPTIV+, we offer everyone, from consumers and SMBs to large enterprises, the ability to train and run large-scale models on affordable hardware. In effect, we are turning everyday devices into supercomputers.”
“Our engineering collaboration enables Phison’s aiDAPTIV+ technology to accommodate and accelerate large models such as gpt-oss-120b on an Acer laptop with just 32GB of memory,” said Mark Yang, AVP, Compute Software Technology at Acer. “This can significantly enhance the user experience interacting with on-device Agentic AI, for actions ranging from simple search to intelligent inquiries that support productivity and creativity.”
aiDAPTIV+ technology and partner solutions will be showcased at the Phison Bellagio Suite and partner booths during CES from January 6-8, 2026, including support for:
Reduced TCO and Memory Consumption
For Mixture of Experts (MoE) inference processing, aiDAPTIV+ offloads memory demands from DRAM to cost-effective flash-based cache memory. In Phison testing, a 120B-parameter model can now be handled with 32GB of DRAM, in contrast to the 96GB required in traditional approaches.1 This expands the ability to do MoE processing to a broader range of platforms.
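The general idea can be illustrated with a minimal, hypothetical sketch: keep only the experts the router actually activates resident in DRAM and fetch the rest from flash on demand. All names below (such as load_expert_from_flash) are placeholders for illustration and are not Phison or aiDAPTIV+ APIs.

from collections import OrderedDict

class FlashBackedExpertCache:
    """Keeps a small DRAM-resident working set of MoE expert weights; the rest stay on flash."""
    def __init__(self, dram_budget_experts, load_expert_from_flash):
        self.capacity = dram_budget_experts      # how many experts fit in the DRAM budget
        self.load = load_expert_from_flash       # placeholder: reads one expert's weights from flash
        self.resident = OrderedDict()            # expert_id -> weights, kept in LRU order

    def get(self, expert_id):
        if expert_id in self.resident:           # expert already in DRAM: reuse it
            self.resident.move_to_end(expert_id)
            return self.resident[expert_id]
        weights = self.load(expert_id)           # fetch the inactive expert from flash on demand
        self.resident[expert_id] = weights
        if len(self.resident) > self.capacity:   # evict the least recently used expert from DRAM
            self.resident.popitem(last=False)
        return weights

Because an MoE router activates only a few experts per token, the DRAM working set stays far smaller than the model's full parameter footprint, which is the effect reflected in the 32GB versus 96GB comparison above.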
Faster Inference Performance
By storing tokens that no longer fit in the KV cache during inference, aiDAPTIV+ makes those tokens reusable for future prompts instead of recalculating them. Based on Phison testing, this accelerates inference response times tenfold and lowers power consumption. Early aiDAPTIV+ inference testing on notebook PCs shows substantial responsiveness gains, with noticeable improvement in Time to First Token (TTFT). These results demonstrate the significant inference acceleration achievable on notebook platforms.
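The pattern can be sketched as follows, purely as an illustration and not as the aiDAPTIV+ implementation: when key/value tensors for a prompt prefix no longer fit in the in-memory KV cache, persist them to flash keyed by a hash of the prefix, then reload them for later prompts that share the prefix instead of recomputing them. The directory name and helper functions are hypothetical.

import hashlib, os, pickle

CACHE_DIR = "kv_cache_spill"      # assumed flash-backed directory
os.makedirs(CACHE_DIR, exist_ok=True)

def prefix_key(token_ids):
    # Derive a stable filename from the prompt-prefix token IDs.
    return hashlib.sha256(str(token_ids).encode("utf-8")).hexdigest()

def spill_kv(token_ids, kv_tensors):
    # Write KV tensors evicted from the in-memory cache out to flash.
    with open(os.path.join(CACHE_DIR, prefix_key(token_ids)), "wb") as f:
        pickle.dump(kv_tensors, f)

def reuse_kv(token_ids):
    # Return cached KV tensors for a matching prefix, or None if absent.
    path = os.path.join(CACHE_DIR, prefix_key(token_ids))
    if not os.path.exists(path):
        return None                # cache miss: the model must recompute this prefix
    with open(path, "rb") as f:
        return pickle.load(f)      # cache hit: skip recomputation, improving TTFT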
Bigger Data on Smaller Devices
Combining Phison’s aiDAPTIV+ and new Intel Core Ultra Series 3 processors with built-in Intel Arc GPUs enables larger LLMs to be trained directly on notebook PCs, addressing industry demand for high-performance AI workflows utilizing iGPUs. Phison’s lab testing shows that a notebook equipped with this technology can fine-tune a 70B-parameter model. A model of that size previously required engineering workstations or data-center servers costing up to ten times more. Now students, developers, researchers and organizations can access far greater AI capabilities on familiar notebook platforms at a lower cost.
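As a rough, hypothetical illustration of how flash can extend limited DRAM during fine-tuning (not Phison's middleware), the sketch below stages optimizer state in a memory-mapped file on NVMe and pulls only one shard at a time into DRAM for each update. The shard size, file name and update step are stand-ins for illustration.

import numpy as np

PARAMS_PER_SHARD = 1_000_000      # toy size; real shards would be far larger

class NvmeOptimizerState:
    """Holds optimizer state in a flash-backed memory-mapped file instead of DRAM."""
    def __init__(self, num_shards, path="optimizer_state.bin"):
        self.store = np.memmap(path, dtype=np.float32, mode="w+",
                               shape=(num_shards, PARAMS_PER_SHARD))

    def load_shard(self, i):
        return np.array(self.store[i])    # copy a single shard into DRAM for the update

    def write_back(self, i, shard):
        self.store[i] = shard             # flush the updated shard back to flash
        self.store.flush()

state = NvmeOptimizerState(num_shards=8)
for shard_id in range(8):
    m = state.load_shard(shard_id)        # DRAM footprint: one shard at a time
    m += 0.01                             # stand-in for the real optimizer update
    state.write_back(shard_id, m)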
At CES 2026, Phison is showcasing demonstrations of partner notebooks, desktops, mini-PCs and personal AI supercomputers running integrated processors with aiDAPTIV+, from partners including Acer, Corsair, MSI and NVIDIA. Phison partner MSI will showcase both an AI notebook and a desktop PC at its booth, utilizing aiDAPTIV+ to accelerate inference performance in an online application built to summarize meeting notes. Additional partners ASUS and Emdoor will be demonstrating notebook and desktop computers leveraging aiDAPTIV+ in their booths.
To request a CES 2026 meeting with Phison to receive an aiDAPTIV+ demo, please reach out to sales@phison.com. To explore Phison’s full portfolio, visit www.phison.com.
For more information about the announcement and solutions, visit https://www.phison.com/en/media-kits/CES-2026.
About Phison Electronics
Phison Electronics is a global leader in NAND flash controllers and storage solutions, powering more than one in every five SSDs shipped worldwide. Phison has grown into a multi-billion-dollar company with over 4,500 employees, 70% of whom are dedicated to R&D, and more than 2,000 patents. The company’s innovations include aiDAPTIV+, an award-winning AI solution for affordable LLM training and inferencing on-premises, and Pascari, a portfolio of ultra-high-performance enterprise SSDs purpose-built for data-intensive workloads across AI, cloud, and hyperscale data centers.
Phison, the Phison design and the Phison logo are registered trademarks or trademarks of Phison Electronics or its affiliates in the US and/or other countries. All other marks are the property of their respective owners. Product specifications are subject to change without notice. Pictures shown may vary from actual products.
Disclaimer: Many of the products and features mentioned are still in development and will be made available as they are finalized. The timeline for their release depends on ongoing development and market conditions and is subject to change.
©2025 Phison Electronics or its affiliates. All rights reserved.
Intel, the Intel logo and other Intel marks are the property of Intel Corporation or its subsidiaries.
1 Derived from Phison testing utilizing a Mixture-of-Experts (MoE) model, GPT-OSS-120B. Performance and outcomes are inherently dependent on the specific underlying architecture and parameterization of the model in use. Different MoE models may yield varying results.
Contacts
PHISON Spokesperson
Antonio Yu
TEL: +886-37-586-896 #10019
Mobile: +886-979-105-026
Email: antonioyu@phison.com
PHISON Deputy Spokesperson
Kuo-Ting Lu
TEL: +886-37-586-896 #26022
Mobile: +886-979-075-330
Email: kuoting_lu@phison.com
PHISON and PASCARI enterprise product inquiries:
sales@phison.com
sales@phisonenterprise.com
PHISON and PASCARI enterprise media inquiries:
Lynn Kelly
Lynn_kelly@phison.com
press_americas@phison.com





