
Gamers Nexus could be the first of many reviewers and content creators fed up with Nvidia’s behavior

In the ever-evolving world of tech journalism, transparency and integrity are paramount. However, recent allegations from Gamers Nexus suggest that Nvidia may be engaging in questionable media practices to control the narrative surrounding its latest GPU releases. This controversy has sparked discussions about corporate influence, journalistic ethics, and the delicate balance between access and independence.

The Accusations: Nvidia’s Alleged Media Pressure Tactics

Gamers Nexus, a well-respected voice in the PC hardware community, has accused Nvidia of pressuring reviewers to include performance metrics for features like Multi-Frame Generation 4X (MFG4X) in their reviews—even when the tested hardware does not support the feature. According to Gamers Nexus, Nvidia allegedly threatened to withhold access to key engineers and internal resources if reviewers did not comply with these demands.

This revelation raises concerns about editorial coercion, where companies leverage their influence to shape public perception. If true, such tactics could undermine the credibility of independent reviews and mislead consumers who rely on unbiased assessments before making purchasing decisions.

The allegations against Nvidia are not just about one company—they highlight a larger issue within the tech industry. When corporations dictate review conditions, they compromise journalistic integrity and erode trust between media outlets and their audiences.

Reports indicate that Nvidia has been selectively providing review drivers only to media outlets that agreed to its strict testing conditions. This means that early reviews of the RTX 5060 were conducted under Nvidia’s preferred benchmarks, potentially skewing the results in favor of the company’s narrative. Such practices raise ethical concerns about whether reviewers can truly provide objective assessments when their access is contingent upon compliance.

The tech community plays a crucial role in challenging corporate influence and demanding transparency. Independent reviewers like Gamers Nexus serve as watchdogs, exposing practices that could mislead consumers. By supporting unbiased journalism and engaging in open discussions, the community can push back against manipulative tactics.

Consumers should also be critical of marketing narratives and seek multiple sources before making purchasing decisions. The more informed the audience, the harder it becomes for corporations to control the conversation.

At its core, this controversy is a reminder that corporate influence should never outweigh journalistic integrity. Nvidia’s alleged tactics, if proven true, highlight the power dynamics between tech giants and independent media. As the industry continues to evolve, maintaining transparency and ethical standards will be essential in preserving trust between companies, reviewers, and consumers.

The situation is somewhat reminiscent of another time Nvidia found itself in hot water a few years ago; while the circumstances were different, it cost the company the complete trust of a well-known partner.

In 2022, EVGA, one of Nvidia’s largest board partners, abruptly exited the GPU market, citing disrespect from Nvidia. According to reports, Nvidia refused to provide basic pre-launch information—such as pricing—until CEO Jensen Huang publicly announced the GPUs. This made it difficult for EVGA to plan its business strategy.

Additionally, Nvidia allegedly restricted pricing on certain cards, while simultaneously releasing Founders Edition GPUs that undercut EVGA’s own products. The oversupply of GPUs further forced EVGA to drastically cut prices, leading to financial losses. Ultimately, EVGA deemed the partnership unprofitable and walked away from the GPU business.

Intel presents the Arc Pro B-Series, expanding its Arc GPU offering

Intel has unveiled its latest Arc Pro B-Series graphics cards at Computex 2025, introducing the Arc Pro B60 and Arc Pro B50 GPUs, designed for workstation applications and AI inference. These new GPUs are built on Intel’s Xe2 Battlemage architecture, featuring Intel Xe Matrix Extensions (XMX) AI cores and hardware-accelerated ray tracing units.

Key Features of Arc Pro B-Series GPUs

  • Arc Pro B60:
    • 24GB GDDR6 memory
    • 197 TOPS of AI performance
    • PCIe 5.0 x8 interface
    • Multi-GPU scalability for AI workloads
    • Targeted for demanding tasks like generative design, 3D simulation, and video editing.
  • Arc Pro B50:
    • 16GB GDDR6 memory
    • 170 TOPS of AI performance
    • Compact dual-slot design
    • Priced at $299, making it an affordable option for professionals.

Intel has also expanded its Gaudi 3 AI accelerator lineup, offering PCIe add-in cards and rack-scale server modules to support large-scale AI inferencing. The Project Battlematrix platform enables multi-GPU configurations, allowing up to eight B60 cards to work together, providing 192GB of video memory for AI models with up to 150 billion parameters.
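To put that 192GB figure in perspective, here is a quick back-of-the-envelope sketch of the VRAM a model of that size needs. The 8-bit weights and ~20% runtime overhead are my illustrative assumptions, not figures from Intel:

```python
# Rough VRAM estimate for hosting a large model on a multi-GPU pool.
# Assumptions (mine, not Intel's): 8-bit (1-byte) weights and ~20%
# overhead for KV cache, activations, and runtime buffers.

def vram_needed_gb(params_billion: float, bytes_per_param: float = 1.0,
                   overhead: float = 0.20) -> float:
    """Approximate VRAM footprint in GB for a model of the given size."""
    return params_billion * bytes_per_param * (1.0 + overhead)

pool_gb = 8 * 24                # eight Arc Pro B60 cards at 24GB each
model_gb = vram_needed_gb(150)  # Intel's 150-billion-parameter figure

print(f"Pool: {pool_gb}GB, 150B model at 8-bit: ~{model_gb:.0f}GB")
# -> Pool: 192GB, 150B model at 8-bit: ~180GB, so the claim is plausible.
```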

Intel aims to challenge NVIDIA and AMD in the workstation segment by offering high memory capacities at competitive prices. The Arc Pro B50 and B60 GPUs will be available in Q3 2025, with full feature enablement scheduled for Q4 2025. Intel is collaborating with ASRock, Gunnir, Maxsun, Sparkle, and other partners to bring these GPUs to market.

These new GPUs mark a significant step forward for Intel in the professional graphics and AI computing space, reinforcing its commitment to open architectures and scalable AI solutions.

AMD Radeon RX 9070 and 9070 XT are officially (re)announced

AMD has officially unveiled its latest graphics card, the Radeon RX 9070 XT, alongside its sibling, the RX 9070. These GPUs are built on AMD’s cutting-edge RDNA 4 architecture, promising a blend of high performance, efficiency, and affordability. Here’s everything you need to know about this exciting release.

Price and Availability

The Radeon RX 9070 XT is priced at $599, while the RX 9070 starts at $549. Both models are set to hit the market globally on March 6, 2025. This aggressive pricing strategy positions AMD as a strong competitor to NVIDIA, particularly against the RTX 5070 Ti, which is priced at $749.

The RX 9070 XT boasts impressive performance metrics:

  • Up to 51% faster than the two-generation-old RX 6900 XT in 4K gaming.
  • Enhanced ray tracing capabilities with third-generation ray accelerators.
  • Improved AI-driven upscaling, thanks to second-generation AI accelerators.

AMD claims that the RX 9070 XT delivers comparable performance to NVIDIA’s RTX 5070 Ti but at a significantly lower price point. This makes it an attractive option for gamers seeking high-end performance without breaking the bank.

Key Features

  1. RDNA 4 Architecture: The new architecture introduces dynamic register allocation for better resource utilization and enhanced compute units for improved parallel processing.
  2. Memory Management: Optimized memory bandwidth usage ensures faster data processing and reduced latency.
  3. FidelityFX Super Resolution 4 (FSR 4): The latest iteration of AMD’s upscaling technology offers sharper visuals and higher frame rates, making it ideal for demanding gaming scenarios (see the render-resolution sketch after this list).
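To make the FSR 4 item above concrete, here is a minimal sketch of what temporal upscaling actually renders internally per quality mode. The per-axis scale factors are the ones AMD published for earlier FSR generations; whether FSR 4 keeps the exact same presets is my assumption:

```python
# Internal render resolution for FSR-style upscaling at a 4K output.
# Scale factors below are AMD's published per-axis ratios for FSR 2/3;
# assuming here that FSR 4 keeps the same presets.

SCALE_FACTORS = {
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
    "Ultra Performance": 3.0,
}

def render_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Resolution the GPU actually renders before upscaling to output."""
    s = SCALE_FACTORS[mode]
    return round(out_w / s), round(out_h / s)

for mode in SCALE_FACTORS:
    w, h = render_resolution(3840, 2160, mode)
    saved = (3840 * 2160) / (w * h)
    print(f"{mode:>17}: {w}x{h} (~{saved:.2f}x fewer pixels shaded)")
```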

AMD’s pricing strategy and technological advancements send a clear message to the GPU market: high-performance gaming doesn’t have to come at a premium price. By undercutting NVIDIA’s pricing while delivering comparable or superior performance, AMD is poised to dominate the mid-to-high-end GPU segment in 2025.

FidelityFX Super Resolution 4 is an RDNA 4 exclusive

AMD has also re-confirmed that FSR 4 relies on the FP8 capabilities of RDNA 4’s second-generation AI accelerators, which means owners of older generations must pin their hopes on AMD backporting FSR 4, or a subset of it, to older cards; whether AMD is actually making that effort is currently unknown.

What is official is that FSR 4 promises up to a 3.7x fps boost at 4K with ray tracing enabled. Hypr-RX, meanwhile, enables all of these features for a game with a single click in the Adrenalin driver.

RDNA 4 cards will ship with Radeon Software Adrenalin Edition 25.3.1 that offers a few nifty AI-powered features while largely retaining the familiar interface.

The latest Adrenalin introduces Radeon Image Sharpening 2, which delivers system-wide image sharpening without relying on any third-party API. There’s also support for up to 8K 75 fps video codec acceleration and hardware flip metering, leveraging the changes to the media engine in RDNA 4.

AMD is also bundling a few utilities with Adrenalin 25.3.1 including AMD Chat, Image Inspector, and AI Apps Manager.

Nvidia Ends PhysX Support with RTX 50 Series

Nvidia has officially retired 32-bit PhysX support on its latest RTX 50 series GPUs, marking the end of an era for the once heavily marketed physics simulation technology. This move comes as Nvidia deprecates 32-bit CUDA applications starting with the RTX 50 series.

PhysX, originally developed by Ageia in 2004 and later acquired by Nvidia, was a proprietary physics simulation SDK capable of processing ragdolls, cloth simulation, particles, volumetric fluid simulation, and other physics-focused graphical effects. It was integrated into several notable AAA games, including the Batman Arkham trilogy, Borderlands: The Pre-Sequel, Borderlands 2, Metro: Last Light, Metro: Exodus, Metro 2033, Mirror’s Edge, The Witcher 3, and some older Assassin’s Creed titles.

PhysX was designed to run physics calculations on the GPU rather than the CPU, allowing for significantly greater rendering performance for physics-related graphical effects. This resulted in higher frame rates and improved quality of physics effects compared to what could be achieved on a CPU.
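The appeal is easy to see in code: a particle update is the same independent arithmetic for every particle, exactly the kind of data-parallel work a GPU chews through. The sketch below is a generic NumPy illustration of that structure, not Nvidia’s PhysX API:

```python
# One integration step for N particles under gravity. Every row gets the
# same independent update, which is why this maps so well to GPU hardware.
# Generic illustration only; this is not the PhysX SDK.
import numpy as np

def step_particles(pos: np.ndarray, vel: np.ndarray, dt: float):
    """Euler-integrate particle positions/velocities for one timestep."""
    gravity = np.array([0.0, -9.81, 0.0])
    vel = vel + gravity * dt                 # same math for all particles
    pos = pos + vel * dt
    pos[:, 1] = np.maximum(pos[:, 1], 0.0)   # crude ground-plane clamp
    return pos, vel

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(100_000, 3))  # 100k particles
vel = np.zeros_like(pos)
pos, vel = step_particles(pos, vel, dt=1 / 60)
```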

Despite its initial success, PhysX’s adoption slowed significantly by the late 2010s as developers moved towards more flexible, cross-platform physics engines. The biggest drawback of PhysX was its strict requirement for an Nvidia GPU, preventing it from being used on competing GPUs, consoles, and smartphones. Nvidia also gradually removed support for some PhysX features, contributing to its decline.

The End of PhysX on RTX 50 Series

Because the deprecation targets 32-bit CUDA, Nvidia has decided to end support for 32-bit PhysX on the RTX 50 series GPUs. This means that games from the 2000s and early 2010s that relied on PhysX for particle and clothing effects will no longer benefit from the technology on the latest Nvidia GPUs, while 64-bit PhysX titles remain supported.

For those who still want GPU-accelerated PhysX in those older titles, the only solution is to install an RTX 40 series or older graphics card alongside the new one and dedicate it to PhysX processing in the Nvidia Control Panel.

The retirement of PhysX on the RTX 50 series marks the end of an era in gaming physics. While it was groundbreaking technology in its prime, the shift towards more versatile, cross-platform solutions has rendered it obsolete. As we move forward, it will be interesting to see what new innovations Nvidia brings to the table.

The Nvidia GeForce RTX 50 Family announced

The rumors on Nvidia’s side are now over: we are in the new generation of GeForce RTX with the GeForce RTX 50 family, debuting with the Nvidia GeForce RTX 5090, GeForce RTX 5080, GeForce RTX 5070 Ti, and GeForce RTX 5070, all presented during Nvidia’s turn on the main stage of CES 2025.

Let’s start with the GeForce RTX 5090 which, new Blackwell architecture aside, is of course the flagship for yet another generation.

The GeForce RTX 5090 includes 32GB of GDDR7, a memory bandwidth of 1,792GB/sec, and a massive 21,760 CUDA cores. The bad news is that the card will consume up to 575 watts, with Nvidia recommending a PSU of at least 1,000 watts.
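That bandwidth figure is easy to sanity-check: peak memory bandwidth is just bus width times per-pin data rate. The 512-bit bus and 28Gbps GDDR7 pin speed below are widely reported RTX 5090 specs rather than numbers from the announcement itself:

```python
# Peak memory bandwidth = (bus width in bytes) x (per-pin data rate).
# The 512-bit bus and 28Gbps GDDR7 figures are widely reported RTX 5090
# specs, used here as assumptions for the sanity check.

def bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return (bus_width_bits / 8) * pin_rate_gbps

print(bandwidth_gb_s(512, 28.0))  # -> 1792.0, matching the 1,792GB/sec spec
```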

That’s 125 watts more than the RTX 4090, but Nvidia justifies it by claiming that the 5090 doubles its predecessor’s performance.

The GeForce RTX 5090 will have a Founders Edition, the reference design that Nvidia sells directly to customers.

Next, the GeForce RTX 5080, aimed squarely at 4K gaming, includes 16GB of GDDR7 memory, a memory bandwidth of 960GB/sec, and 10,752 CUDA cores. The RTX 5080 will have a total graphics power of 360 watts, and Nvidia recommends an 850-watt power supply.

As with the 5090, Nvidia claims it doubles the performance of 2022’s GeForce RTX 4080.

In my opinion, the surprise of the keynote was the simultaneous announcement of the GeForce RTX 5070 and the RTX 5070 Ti, a Ti model normally arriving later as a refresh of the base model.

The RTX 5070 has 12GB of GDDR7, a memory bandwidth of 672GB/sec, and 6,144 CUDA cores, while the GeForce RTX 5070 Ti includes 16GB of GDDR7 memory, a memory bandwidth of 896GB/sec, and 8,960 CUDA cores.

In what is by now a repetitive claim, Nvidia says both the RTX 5070 and the Ti will be 2x faster than their respective predecessors. And yes, let’s talk about power consumption: the RTX 5070 Ti will have a total graphics power of 300 watts and require a 750-watt PSU, while the RTX 5070 has a total graphics power of 250 watts and only needs a 650-watt PSU.

All the GPUs mentioned will have a Founders Edition, and expect Nvidia’s partners to offer their own designs.

MSRP pricing for the Founders Editions goes as follows:

  • GeForce RTX 5090 – $1999
  • GeForce RTX 5080 – $999
  • GeForce RTX 5070 Ti – $749
  • GeForce RTX 5070 – $549

Nvidia confirmed a January 30th release for the RTX 5090 and RTX 5080 Founders Editions, with the RTX 5070 models following in February; availability of partner variants was not immediately shared.

About the novelties of the Blackwell architecture

The Blackwell architecture gives Nvidia the hardware foundation for the next generation of its AI-based graphics upscaler, DLSS 4, improving massively on ray tracing processing and introducing Multi Frame Generation, which generates up to three additional frames per traditionally rendered frame and can multiply frame rates.
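The arithmetic behind that multiplier is straightforward, as the sketch below shows; keep in mind input latency still tracks the traditionally rendered frames, since generated frames don’t sample new player input:

```python
# Multi Frame Generation math: displayed fps = rendered fps x (1 + generated
# frames per rendered frame). With up to 3 generated frames, that's up to 4x.

def displayed_fps(rendered_fps: float, generated_per_rendered: int) -> float:
    """Frames shown per second with frame generation enabled."""
    return rendered_fps * (1 + generated_per_rendered)

print(displayed_fps(60, 3))  # 60 rendered fps -> up to 240 displayed fps
```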

DLSS 4 also includes a real-time application of transformer models to improve image quality, reduce ghosting, and add higher detail in motion. In a big contrast with AMD’s FSR 4 announcement today, Nvidia opted to make DLSS 4 available on 2022’s Lovelace architecture as well, though Multi Frame Generation itself remains exclusive to the RTX 50 series.

Nvidia also demoed what it calls RTX Neural Shaders, RTX Neural Faces, and text-to-animation. Starting with the first: the job of RTX Neural Shaders is to compress textures in games, while RTX Neural Faces aims to improve face quality using generative AI.

RTX 50 for gaming laptops

Nvidia also confirmed that mobile versions of the GeForce RTX 50 family are coming to gaming laptops this year, with the RTX 5090 laptop GPU debuting with 24GB of GDDR7 memory. The RTX 5080 laptop GPU will ship with 16GB of GDDR7 memory, the RTX 5070 Ti with 12GB, and the RTX 5070 with just 8GB.

We should be hearing announcements soon, with releases as early as March 2025.

Intel found its unlikely hero with Arc

In a year where Intel faced numerous challenges, the launch of their new Arc B580 Battlemage GPU has turned out to be a surprising triumph. Released just in time for the holiday season, the Arc B580 has quickly captivated the gaming community, selling out at major retailers such as Amazon and Newegg within mere days of its release.

Priced at an affordable $249, the Arc B580 has managed to offer a competitive edge in a market typically dominated by Nvidia and AMD. This GPU packs a punch with its impressive specifications, featuring 12GB of GDDR6 VRAM on a 192-bit memory interface and a 2.67GHz graphics clock. These features enable it to deliver robust performance in 1440p gaming, rivaling the Nvidia GeForce RTX 4060 and AMD Radeon RX 7600 at a more budget-friendly price point.

Performance and Features

One of the standout features of the Arc B580 is its efficiency in balancing power consumption with performance. Intel’s Xe2-HPG architecture, the backbone of the Battlemage series, has brought significant improvements in ray tracing and AI-based supersampling. Early benchmarks indicate that the Arc B580 performs exceptionally well in modern titles like Cyberpunk 2077, Call of Duty: Modern Warfare II, and Assassin’s Creed Valhalla, maintaining a consistent frame rate even at high settings.

The gaming community’s reception to the Arc B580 has been overwhelmingly positive. Reviews praise its value for money, performance per watt, and the effective cooling solution that keeps the GPU running at optimal temperatures even under heavy load. Moreover, the GPU’s support for the latest DirectX 12 Ultimate and Vulkan APIs ensures that it remains future-proof for upcoming game releases.

This unexpected success couldn’t have come at a better time for Intel, which has faced significant hurdles in the CPU market and organizational changes, including the recent departure of CEO Pat Gelsinger. The Arc B580’s success has provided a much-needed morale boost and renewed confidence in Intel’s ability to innovate in the GPU space.

Intel has announced that they are ramping up production and working closely with partners to ensure steady inventory replenishments. This move aims to meet the high demand and prevent prolonged shortages that have often plagued new hardware launches.

Looking Ahead

As we approach CES 2025, the industry buzz is already building around Intel’s roadmap for their Battlemage GPUs. With Nvidia gearing up to launch their RTX 5000 series, it will be fascinating to see how Intel’s Battlemage series stands against the competition. Rumors suggest that Intel is also preparing to introduce even more powerful variants of the Battlemage GPU, which could further solidify their position in the market.

Intel’s Arc B580 Battlemage GPU has not only met but exceeded expectations, proving to be a formidable contender in the competitive GPU market. Its blend of performance, affordability, and advanced features make it a compelling choice for gamers and tech enthusiasts alike. This launch marks a significant milestone for Intel, signaling a potential shift in the landscape of the graphics card industry.