From hatchbacks to Hiluxes, PolicyStreet has over RM100,000 up for grabs with every car insurance renewal made on its platform, with no minimum purchase necessary.
KUALA LUMPUR, Malaysia, July 7, 2025 /PRNewswire/ -- If you're renewing your car insurance anywhere else, you might be leaving money on the table. This July, PolicyStreet is giving out over RM100,000 as part of its nationwide Cash Kembali campaign, rewarding every customer who renews their car insurance through its platform with RM100 in Touch 'n Go (TnG) eWallet reloads.
Running from 7 July 2025 until further notice, Cash Kembali is open to all Malaysians, regardless of car make or model, with no minimum purchase required. The RM100 cashback will be sent via email as a TnG reload PIN within 30 days of purchase, just for doing something you were already going to do.
"It's easy to forget that car insurance is one of the biggest yearly expenses for most Malaysians. We wanted to flip that expectation and turn a yearly expense into an instant reward. You get protected and get paid," said Yen Ming Lee, Co-founder and Group Chief Executive Officer of PolicyStreet.
The RM100 cashback applies per vehicle, not per customer. Malaysians with multiple vehicles will receive RM100 for each eligible renewal made through PolicyStreet: a parent with two cars in their name, for example, could collect a total of RM200 by renewing both through the platform.
To claim your share, simply visit car.policystreet.com and request a quote. In just a few clicks, you'll receive personalised quotes from PolicyStreet's top recommended insurers, selected for their value and coverage. Add-ons like road tax renewal, windshield protection, and more can be included before you check out online; no paperwork, no phone calls, no queues.
Want To Save Even More?
Channel your inner extreme couponer and stack extra promo codes to stretch your Ringgit further. Use PSBelanja10 to get RM10 off your first car insurance renewal, or PAYDAY35 to snag RM35 off between the 25th and 30th of every month; all on top of the RM100 cashback.
Prefer not to pay everything upfront? Spread your premium over up to 12 months with Easy Payment Plans from nine participating banks, including Maybank, Public Bank, and CIMB Bank. Debit card holders can also split payments via Buy Now Pay Later options like Atome, SPayLater, or PayLater by Grab, with platform discounts available at checkout.
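As a rough illustration of how the campaign's savings and payment options combine, here is a short sketch. The promo amounts, the RM100 cashback, and the 12-month split come from the campaign description above; the RM800 premium is an invented example, and any bank fees on installment plans are ignored.

```python
# Hypothetical illustration of stacking the Cash Kembali cashback with a
# promo code and an Easy Payment Plan. The RM800 premium is invented;
# promo and cashback amounts come from the campaign description.

def net_cost(premium, promo_discount, cashback=100):
    """Effective cost after an upfront promo code and the TnG eWallet
    cashback received after purchase."""
    paid_at_checkout = premium - promo_discount
    return paid_at_checkout - cashback

def monthly_installment(amount, months=12):
    """Even split under an Easy Payment Plan (ignoring any bank fees)."""
    return round(amount / months, 2)

premium = 800.00                      # example annual premium (invented)
paid = premium - 35                   # PAYDAY35 code, 25th-30th of the month
print(net_cost(premium, 35))          # 665.0 effective cost after cashback
print(monthly_installment(paid))      # 63.75 per month over 12 months
```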
The Cash Kembali campaign is just the beginning. More ways to save are set to roll out soon, so keep an eye out for even more rewards, better value, and simplified insurance with PolicyStreet. Full campaign details and terms & conditions are available at https://policystreet.com.my/tnc
About PolicyStreet
PolicyStreet is a regional full-stack insurance technology (insurtech) group of companies providing cutting-edge digital insurance solutions to businesses and consumers in Southeast Asia and Australia.
PolicyStreet works directly with over 40 life, general insurers and Takaful providers globally to offer a comprehensive range of products and services, which includes but is not limited to embedded insurance, customised employee benefits, financial advisory and aggregation of insurance, as well as the development of digital solutions to make insurance purposeful and simple for businesses and consumers.
As a licensed Reinsurer, General Insurer and Takaful Operator by the Labuan Financial Services Authority (LFSA), an approved Financial Adviser and Islamic Financial Adviser by Bank Negara Malaysia (BNM), and a licensee of the Australian Financial Services License by the Australian Securities and Investments Commission (ASIC), PolicyStreet is able to underwrite, customise policies, and provide unbiased advice to its clients and partners worldwide.
PolicyStreet is backed by the Malaysian sovereign wealth fund, Khazanah Nasional Berhad, and serves over 5 million customers with over US$ 10 billion in sum insured. In 2024, PolicyStreet was recognised as "Fintech of the Year" at The Asset's Triple A Digital Awards and was the winner of the Fintech Excellence Award for Financial Inclusion at the Singapore Fintech Festival, endorsed by the Monetary Authority of Singapore (MAS). The company was also ranked as the second-highest Malaysian company in the "High-Growth Companies in Asia Pacific 2024" list by Statista and The Financial Times.
PolicyStreet Offers RM100 Touch 'n Go eWallet Credit for Every Car Insurance Renewal
- Supermicro's NVIDIA Vera Rubin NVL72 and HGX Rubin NVL8 systems are built on the DCBBS liquid-cooling stack, targeting up to 10x throughput per watt and one-tenth the token cost, compared to NVIDIA Blackwell solutions.
- Supermicro's 2U HGX Rubin NVL8 system is its most flexible platform, supporting the NVIDIA Vera CPU and next-generation x86 CPUs, scaling to 72 Rubin GPUs per rack, and offering a DCBBS Liquid-to-Air (L2A) Sidecar CDU option for data centers without liquid cooling.
- Supermicro's new NVIDIA Vera CPU systems include a 2U server supporting up to 6 RTX PRO 4500 Blackwell Server Edition GPUs and a new AI storage system for context memory extension integrated with NVIDIA BlueField-4 DPU.
SAN JOSE, Calif., March 17, 2026 /PRNewswire/ -- Super Micro Computer, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for Cloud Computing, AI/ML, Storage, and 5G/Edge, today unveiled its upcoming system portfolio powered by the NVIDIA Vera Rubin platform. As data centers transform into AI factories, producing intelligence at massive scale, agentic reasoning, long-context AI, and Mixture-of-Experts (MoE) workloads are driving demand for an entirely new class of compute and storage infrastructure. Supermicro's NVIDIA Vera Rubin NVL72, NVIDIA HGX Rubin NVL8, and NVIDIA Vera CPU systems are being designed and built with Supermicro's Data Center Building Block Solutions (DCBBS) advanced liquid-cooling technology stack to accelerate time-to-market for customers.
"We are entering a new era where every organization requires an AI factory to win in the marketplace, as the demand for inference workloads is reshaping what data center infrastructure must deliver," said Charles Liang, president and CEO of Supermicro. "Supermicro's DCBBS technology stack is being engineered to empower upcoming NVIDIA Vera Rubin NVL72, HGX Rubin NVL8, and Vera CPU systems to give our customers a fast, clear path to deploying next-generation AI factories, at scale. We are excited to provide an early look at these solutions as a testament to being first to market with the infrastructure that will power the next frontier of AI."
For more information visit NVIDIA Vera Rubin | Supermicro.
Supermicro DCBBS for NVIDIA Vera Rubin and Rubin Platforms
Delivering AI factory performance at scale requires much more than just compute — it demands power, cooling, and networking infrastructure that performs seamlessly. Supermicro's modular DCBBS approach enables data center operators to deploy validated, pre-engineered rack solutions rather than custom-building infrastructure for each project — reducing time-to-online, minimizing integration risk, and lowering total cost of ownership across AI factory deployments of any scale.
Supermicro's DCBBS are engineered specifically to meet the evolving thermal, power, and networking demands needed to enable rapid and robust deployment of upcoming NVIDIA Vera Rubin NVL72, NVIDIA HGX Rubin, and NVIDIA Vera CPU infrastructure. To meet the needs of Vera Rubin platforms, which will be fully liquid-cooled from this generation forward, DCBBS includes a full suite of validated liquid-cooling infrastructure. This expansion includes in-rack and in-row components such as coolant distribution units (CDUs), manifolds, and liquid-to-air sidecars. Also included are infrastructure solutions such as cooling towers and cabling design and implementation services — designed to integrate seamlessly with Supermicro's next-generation system portfolio.
Supermicro NVIDIA Vera Rubin NVL72 SuperCluster
Supermicro is engineering its NVIDIA Vera Rubin NVL72 with new DCBBS liquid-cooling components to fully support the power and thermal envelope at rack and cluster scale. This includes the manufacturing of optimized NVIDIA MGX racks, in-rack or in-row CDUs, rear-door heat exchangers (RDHx), and L2A sidecars to streamline production and deployment of the rack-scale AI supercomputer. The Vera Rubin NVL72 operates as a single rack-scale accelerator, unifying six co-designed chips — Rubin GPU, Vera CPU, NVIDIA NVLink 6, NVIDIA ConnectX-9 SuperNIC, NVIDIA BlueField-4 DPU, and NVIDIA Spectrum-X Ethernet — to deliver up to 3.6 Exaflops of inference, 75TB of fast memory, and 1.6 PB/s of HBM4 bandwidth, targeting up to 10x the throughput per watt and one-tenth the token cost compared to NVIDIA Blackwell.
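Taken at face value, the quoted rack totals imply rough per-GPU shares. The sketch below is a naive even split across the 72 GPUs, not an official per-GPU spec — in particular, the 75TB of "fast memory" spans both the Vera CPUs and the Rubin GPUs in the rack.

```python
# Back-of-the-envelope per-GPU shares implied by the quoted NVL72 rack
# totals. These are naive even splits, not official per-GPU specs; the
# fast-memory figure combines CPU and GPU memory across the rack.

GPUS_PER_RACK = 72

rack = {
    "inference_exaflops": 3.6,   # up to 3.6 EF of inference compute
    "fast_memory_tb": 75,        # CPU + GPU fast memory combined
    "hbm4_bandwidth_pbps": 1.6,  # aggregate HBM4 bandwidth, PB/s
}

per_gpu = {k: v / GPUS_PER_RACK for k, v in rack.items()}
print(round(per_gpu["inference_exaflops"] * 1000, 1))   # ~50.0 PFLOPS per GPU
print(round(per_gpu["hbm4_bandwidth_pbps"] * 1000, 1))  # ~22.2 TB/s per GPU
```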
NVIDIA HGX Rubin NVL8 System
The 2U HGX Rubin NVL8 system provides the densest and most flexible HGX platform — and the first HGX platform to offer greater flexibility in CPU selections including NVIDIA Vera CPUs alongside next-generation AMD and Intel x86 processors. Built on the NVIDIA MGX rack architecture with Supermicro's blind mate busbar and manifold for tool-free rack integration, it gives customers the freedom to pair eight Rubin GPUs with the CPU platform that best fits their workload and software stack.
The design supports nine HGX Rubin NVL8 systems per rack — up to 72 Rubin GPUs total — for large-scale AI training, inference, and accelerated HPC. DCBBS provides in-rack CDUs, in-row CDUs, RDHx, and an optional liquid-to-air (L2A) sidecar for customers deploying in liquid-cooled or air-cooled data center environments.
NVIDIA Vera CPU System with RTX PRO
Supermicro's NVIDIA Vera CPU system is being engineered as a versatile AI compute node for organizations targeting next-generation agentic AI deployments. The system features dual NVIDIA Vera CPUs supporting up to 6 RTX PRO 4500 Blackwell Server Edition GPUs in a compact 2U chassis — delivering the compute density and energy efficiency needed for enterprise AI inference, agentic workloads, visualization, and bringing accelerated computing to all enterprise workloads. It pairs a high-bandwidth LPDDR5X memory subsystem with PCIe GPU acceleration in a space-efficient footprint.
NVIDIA BlueField-4 STX Context Memory Storage Platform
Supermicro's upcoming Context Memory Storage Platform (CMX) introduces a new class of AI-native storage for context memory — architected as an intelligent pod-level context memory storage tier that extends GPU KV cache capacity and serves long-context inference data at the throughput that Vera Rubin NVL72 super pod clusters demand. Powered by the NVIDIA BlueField-4 processor, NVIDIA Vera CPUs, NVIDIA ConnectX-9 SuperNICs, NVIDIA Spectrum-X Ethernet, NVIDIA DOCA, and NVIDIA Dynamo, the system provides the high-bandwidth, low-latency fabric and intelligent data path offload that large-scale AI inference pipelines and RAG workloads require.
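To make concrete why long-context inference pushes KV cache beyond GPU memory — the motivation for a context-memory storage tier like this — here is the standard transformer KV-cache sizing formula. The model parameters below are purely illustrative and are not the specs of any system named in this release.

```python
# Standard transformer KV-cache sizing: for each token, every layer stores
# one key and one value vector per KV head. The model parameters in the
# example are illustrative only, not tied to any system in this release.

def kv_cache_bytes(seq_len, layers, kv_heads, head_dim, dtype_bytes=2):
    # factor of 2 accounts for storing both K and V
    return 2 * seq_len * layers * kv_heads * head_dim * dtype_bytes

# Example: a large model serving a 1M-token context in FP16 (2 bytes/value).
gib = kv_cache_bytes(seq_len=1_000_000, layers=80, kv_heads=8,
                     head_dim=128, dtype_bytes=2) / 2**30
print(round(gib))  # ~305 GiB of KV cache for a single request
```

At hundreds of GiB per long-context request, the cache quickly exceeds any single GPU's HBM, which is why offloading it to a fast pod-level storage tier matters.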
Supermicro NVIDIA Blackwell Solutions — Available Now
With next-generation systems in rapid development, Supermicro's current portfolio of NVIDIA Blackwell-based systems is in full production and available for immediate deployment through Supermicro's US and global manufacturing capacity, enabling customers to build and scale production AI infrastructure today. Supermicro is investing across both its current Blackwell lineup and its next-generation systems to ensure customers have the right platform at every stage of this transformation.
Visit Supermicro at GTC San Jose 2026
Supermicro will be unveiling early previews of its Vera Rubin platform systems alongside its current production Blackwell portfolio. Supermicro experts will be available to discuss current procurement options, roadmap planning, and deployment timelines for both near-term and next-generation AI infrastructure in Supermicro booth #1113.
About Super Micro Computer, Inc.
Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are a Total IT Solutions provider with server, AI, storage, IoT, switch systems, software, and support services. Supermicro's motherboard, power, and chassis design expertise further enables our development and production, enabling next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the US, Asia, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free air cooling or liquid cooling).
Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc.
All other brands, names, and trademarks are the property of their respective owners.
Supermicro Reveals DCBBS® with New NVIDIA Vera Rubin NVL72, HGX Rubin NVL8, and Vera CPU Systems, Designed to Accelerate Customer Time-to-Market