GTA 5’s Graphics Engine: How a 2013 Game Still Looks Good in 2026

Grand Theft Auto V launched in September 2013 on PlayStation 3 and Xbox 360 hardware, consoles with 512MB of RAM and processors from 2005. Thirteen years later, the game not only survives but thrives across three console generations and PC, maintaining visual competitiveness against modern titles. This longevity stems from Rockstar’s RAGE (Rockstar Advanced Game Engine) technology, a sophisticated graphics and physics engine that was over-engineered for its time and designed with scalability as a core principle.

The RAGE Engine Foundation

RAGE debuted with Rockstar Games Presents Table Tennis in 2006, but GTA 5 represents its most ambitious implementation. The engine combines proprietary rendering technology with the Euphoria physics simulation and procedural animation systems licensed from NaturalMotion. This hybrid approach creates the realistic character movement and environmental interaction that define the GTA experience.

What makes RAGE particularly impressive is its scalability. The same codebase runs on hardware spanning four orders of magnitude in computational power, from 2005 console processors to modern RTX 4090 graphics cards. This requires sophisticated dynamic level of detail systems, adaptive texture streaming, and resolution-independent rendering pipelines that few engines achieve even today.

The engine’s renderer employs deferred shading, a technique that separates geometry rendering from lighting calculations. This allows Los Santos to feature hundreds of dynamic light sources simultaneously without crippling performance. Street lights, vehicle headlights, neon signs, and environmental effects all contribute to lighting in real time, creating the atmospheric depth that keeps the game visually engaging over a decade after release.
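The key property of deferred shading is that lighting cost scales with the lights and the pixels they touch, not with scene geometry. A minimal Python sketch of the idea, using a toy G-buffer and made-up light values (not RAGE's actual pipeline):

```python
import numpy as np

# Tiny G-buffer: per-pixel albedo, world-space normal, and position.
H, W = 4, 4
albedo   = np.full((H, W, 3), 0.8)
normal   = np.tile(np.array([0.0, 1.0, 0.0]), (H, W, 1))   # flat ground
position = np.zeros((H, W, 3))

# Hypothetical point lights: (position, color, radius).
lights = [
    (np.array([1.0, 2.0, 0.0]), np.array([1.0, 0.9, 0.7]), 5.0),
    (np.array([3.0, 1.0, 2.0]), np.array([0.2, 0.4, 1.0]), 4.0),
]

# Lighting pass: geometry was rasterized once into the G-buffer, so each
# extra light is just another pass over the pixels it can reach.
lit = np.zeros((H, W, 3))
for lpos, lcolor, lradius in lights:
    to_light = lpos - position                      # (H, W, 3)
    dist = np.linalg.norm(to_light, axis=-1, keepdims=True)
    direction = to_light / np.maximum(dist, 1e-6)
    ndotl = np.clip(np.sum(normal * direction, axis=-1, keepdims=True), 0, 1)
    falloff = np.clip(1.0 - dist / lradius, 0, 1)   # simple radius falloff
    lit += albedo * lcolor * ndotl * falloff

print(lit.shape)  # (4, 4, 3)
```

Adding a hundredth street light to this loop costs another lighting pass, never another geometry pass, which is what makes dense urban lighting tractable.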

Texture Streaming and Memory Management

GTA 5’s massive open world presents extreme technical challenges. Los Santos covers approximately 127 square kilometers, filled with detailed buildings, vegetation, roads, and thousands of assets. Loading this entire world into memory is impossible even on modern hardware, requiring sophisticated streaming systems that predict player movement and preload assets accordingly.

Rockstar’s texture streaming technology analyzes player velocity, camera direction, and historical movement patterns to determine which assets need high resolution textures and which can use lower quality versions. This predictive loading happens continuously in the background, invisible to players but critical to maintaining visual quality without loading screens during open world traversal.

The system’s intelligence becomes apparent when players move at high speeds. Textures and geometry ahead of the player load at higher priority than assets behind them. Buildings in the player’s peripheral vision receive medium detail, while structures directly in the view cone get full resolution treatment. This selective quality approach maximizes perceived visual fidelity while staying within hardware constraints.
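The prioritization just described can be sketched as a scoring function over assets. Everything here, the weights, the falloff, the name `stream_priority`, is illustrative rather than Rockstar's actual heuristic:

```python
import math

def stream_priority(player_pos, velocity, view_dir, asset_pos):
    """Toy priority score: assets ahead of a fast-moving player and inside
    the view cone rank highest. Weights are illustrative, not Rockstar's."""
    dx = asset_pos[0] - player_pos[0]
    dy = asset_pos[1] - player_pos[1]
    dist = math.hypot(dx, dy) or 1e-6
    to_asset = (dx / dist, dy / dist)

    # Predictive term: bias toward the direction of travel, scaled by speed.
    speed = math.hypot(*velocity)
    travel_dir = (velocity[0] / speed, velocity[1] / speed) if speed else (0, 0)
    ahead = max(0.0, to_asset[0] * travel_dir[0] + to_asset[1] * travel_dir[1])

    # View-cone term: full weight in front of the camera, less in the periphery.
    facing = max(0.0, to_asset[0] * view_dir[0] + to_asset[1] * view_dir[1])

    # Closer assets, and assets ahead of and in front of the player, load first.
    return (1.0 / dist) * (1.0 + speed * ahead) * (0.25 + 0.75 * facing)

# An asset ahead of a fast-moving player outranks one the same distance behind.
front = stream_priority((0, 0), (30, 0), (1, 0), (100, 0))
back  = stream_priority((0, 0), (30, 0), (1, 0), (-100, 0))
print(front > back)  # True
```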

PC versions leverage additional VRAM to extend streaming distances and maintain higher resolution textures longer, but the fundamental systems remain identical across platforms. This unified architecture simplifies development while allowing each platform to scale performance according to available resources.

Dynamic Resolution and Temporal Anti-Aliasing

Modern GTA 5 implementations on PlayStation 5 and Xbox Series X employ dynamic resolution scaling, adjusting rendering resolution frame by frame to maintain target framerates. When on-screen complexity increases, such as during explosions or high traffic density, the engine reduces rendering resolution slightly. During calmer moments, it scales back up to native resolution.

This technique, combined with temporal anti-aliasing that uses information from previous frames to smooth edges and reduce aliasing artifacts, creates the illusion of consistent high-resolution rendering even when internal resolution fluctuates. Most players never notice these adjustments, experiencing only smooth performance regardless of on-screen chaos.

Temporal anti-aliasing also helps with the thin geometry that plagues open world games: power lines, fences, railings. Traditional anti-aliasing struggles with single-pixel-wide objects that flicker and shimmer during movement. By analyzing multiple frames, temporal solutions stabilize these problematic elements, significantly improving visual stability during gameplay.
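The temporal accumulation at the heart of TAA can be illustrated in a few lines. This sketch deliberately omits the reprojection and history clamping that real implementations need:

```python
def taa_resolve(history, current, alpha=0.1):
    """Exponential history blend -- the core of temporal anti-aliasing."""
    return history + alpha * (current - history)

# A one-pixel-wide fence picket whose raster coverage flickers on/off:
raw_frames = [1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]

resolved = raw_frames[0]
outputs = []
for frame in raw_frames[1:]:
    resolved = taa_resolve(resolved, frame)
    outputs.append(resolved)

# The raw signal swings the full 0 <-> 1 range every frame; the resolved
# pixel drifts toward its average coverage instead of shimmering.
print(max(outputs) - min(outputs) < 1.0)  # True: variation is damped
```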

Lighting and Atmospheric Effects

GTA 5’s time of day system demonstrates the engine’s lighting sophistication. The game simulates a complete 24-hour cycle with dynamic sun position, atmospheric scattering, and color temperature shifts that affect all lighting in the scene. Sunrise and sunset periods feature particularly impressive volumetric light scattering, creating god rays that stream through clouds and between buildings.
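A time-of-day system ultimately maps the clock onto a sun elevation and color temperature that feed every light calculation in the scene. A deliberately simplified model, where the 6:00/18:00 sunrise/sunset and the warmth curve are illustrative, not the game's:

```python
import math

def sun_params(hour):
    """Toy diurnal cycle: sun elevation and color warmth from the 24-hour
    clock, with sunrise at 6:00 and sunset at 18:00."""
    day_frac = (hour - 6.0) / 12.0            # 0 at sunrise, 1 at sunset
    elevation = math.sin(day_frac * math.pi)  # peaks at noon
    warmth = 1.0 - max(0.0, elevation)        # warm near the horizon
    return elevation, warmth

elev_noon, warmth_noon = sun_params(12.0)
elev_dusk, warmth_dusk = sun_params(17.5)
print(elev_noon > elev_dusk and warmth_dusk > warmth_noon)  # True
```

Because every light in the scene reads these values each frame, one slider (the clock) re-grades the entire world, which is why golden hour in Los Santos looks so different from noon.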

Weather systems add another layer of complexity. Rain doesn’t just add particle effects; it transforms surface properties, creating wet road reflections, changing friction characteristics for vehicles, and affecting visibility through atmospheric fog. These interconnected systems create believable environmental conditions that enhance immersion beyond simple visual spectacle.

The volumetric fog and cloud rendering use ray marching techniques, sampling atmospheric density at multiple depths to calculate light scattering through the medium. This computationally expensive approach was cutting edge in 2013 and remains impressive today, contributing to the game’s distinctive visual atmosphere.
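Ray-marched volumetrics accumulate in-scattered light while attenuating by Beer-Lambert transmittance at each step. A generic sketch of the technique, with an arbitrary density profile and step count:

```python
import math

def march_fog(ray_length, density_fn, light, steps=32):
    """Minimal ray march through a heterogeneous medium: accumulate
    in-scattered light weighted by Beer-Lambert transmittance."""
    dt = ray_length / steps
    transmittance = 1.0
    scattered = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt                      # sample mid-segment
        sigma = density_fn(t)                   # extinction at this depth
        scattered += transmittance * sigma * light * dt
        transmittance *= math.exp(-sigma * dt)  # Beer-Lambert absorption
    return scattered, transmittance

# Fog that thickens with distance along the ray (illustrative profile).
inscatter, trans = march_fog(100.0, lambda t: 0.002 + 0.0001 * t, light=1.0)
print(0.0 < trans < 1.0)  # True: some light survives the medium
```

The cost is one density sample and one exponential per step per pixel, which is why this was expensive in 2013 and why step counts are a common quality knob.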

The Player Investment Factor

GTA Online’s persistent nature creates an interesting technical challenge and opportunity. Players invest hundreds or thousands of hours building their criminal empires, accumulating properties, vehicles, and customization options. This long-term engagement justifies Rockstar’s continued technical support and optimization across new hardware platforms.

Services like Gameboost and marketplaces for GTA accounts exist partly because the technical investment Rockstar made in the engine allows the game to remain relevant across hardware generations. Players can transfer their accounts from old consoles to new ones, keeping their progress while enjoying improved graphics and performance on superior hardware.

Future-Proofing Through Modularity

RAGE’s modular architecture allows Rockstar to update specific rendering components without rebuilding the entire engine. The PC version has received multiple graphics updates post-launch, adding features like improved anti-aliasing, enhanced shadow resolution, and higher quality texture filtering. These improvements slot into the existing framework because the engine was designed with modularity from inception.

This approach contrasts with engines that tightly couple rendering and gameplay code, making updates risky and time-consuming. RAGE’s separation of concerns allows graphics programmers to optimize rendering paths while gameplay engineers work on different systems simultaneously, accelerating development and enabling incremental improvements over years.

The Technical Debt Question

No engine survives 13 years without accumulating technical debt. RAGE shows its age in certain areas, particularly texture pop-in during fast travel and occasional geometry streaming issues when pushing hardware limits. The engine’s multi-platform origins create compromises that a ground-up modern engine wouldn’t face.

However, the consistency of these issues across platforms suggests they’re fundamental to the streaming approach rather than implementation bugs. Rockstar has clearly decided that occasional texture loading artifacts are acceptable trade-offs for the seamless open world experience that defines GTA gameplay.

Lessons for Modern Engine Design

GTA 5’s longevity offers valuable lessons for graphics engine architecture. Over-engineering for future hardware proves worthwhile when supporting a live service game across multiple console generations. Sophisticated streaming systems that seemed excessive on 2013 hardware enable the game to scale smoothly to modern platforms with dramatically more memory and processing power.

The engine demonstrates that photorealistic graphics matter less than consistent visual quality and strong art direction. Los Santos succeeds not because it renders more polygons than competitors, but because its lighting, atmospheric effects, and attention to detail create a convincing world that players want to inhabit.

As the industry shifts toward games-as-a-service models requiring multi-year support, GTA 5's technical foundation shows the value of building scalable, modular engines designed for evolution rather than obsolescence. The game's continued commercial success validates this technical investment, proving that well-engineered fundamentals outlive cutting-edge features targeting specific hardware.

 

nPerf launches the first 4K streaming test

At nPerf, our objective is to provide the most complete and objective view possible of network performance, in order to contribute to better connectivity for everyone.

This commitment requires continuously adapting our tools to real-world usage.

Streaming at the heart of the user experience

Watching an online video is not limited to connection speed alone. Perceived quality depends on smooth, stable playback that matches the requested resolution. This is exactly what the nPerf streaming test measures, by evaluating the overall quality of experience as it is truly lived by users.

In response to the widespread adoption of high resolutions, nPerf has evolved its test to remain faithful to current usage and to its mission of objectivity.

4K integrated into the streaming test

The nPerf streaming test now supports 2160p (4K) resolution.

This evolution makes it possible to analyze more precisely the ability of networks to support some of the most demanding video use cases.

At the same time, the measured resolutions have been refocused on formats that are actually used today:

  • 720p (HD Ready)
  • 1080p (Full HD)
  • 2160p (4K or UHD)

A faster test, with no compromise on quality

This evolution also brings a direct improvement for users. The new accelerated “Fast Forward” mode now makes it possible to speed up video playback in 720p (HD Ready) and 1080p (Full HD), while maintaining a precise and representative level of measurement. This feature divides the total duration of the streaming test by two.

A time saver, with no compromise on the reliability of the results.

Available on all platforms

This new 2160p (4K) streaming test is now available across all nPerf applications: Android, iOS, and Desktop.

nPerf – Turning individual measurements into collective progress

Maono PD200W wireless dynamic microphone review

The Maono PD200W stands out as one of the most versatile and budget-friendly microphones available for creators, streamers, and podcasters. By successfully merging a professional-grade dynamic capsule with 2.4 GHz wireless functionality, it delivers exceptional flexibility without compromising on audio quality or features.

The “Hybrid” moniker is well-earned, as the PD200W offers a trifecta of connectivity options, ensuring it fits into nearly any recording setup:

  1. 2.4 GHz Wireless: This is the headline feature. The mic operates wirelessly via a small receiver that plugs into your device (PC, phone, camera) via USB-C or a 3.5mm jack (depending on the receiver version). This allows for up to 60 meters of range and is perfect for on-the-go interviews, vlogging, and clean desktop setups.
  2. USB-C Digital: A simple plug-and-play connection to a PC, laptop, or phone, delivering high-resolution 24-bit/48kHz digital audio. This is ideal for streaming and studio work.
  3. XLR Analog: For professional setups, the XLR output allows connection to a mixer or audio interface, offering robust analog performance.

 

Audio Quality and Noise Suppression

The PD200W is designed to capture clear, focused vocals, a hallmark of dynamic microphones.

The 30mm dynamic capsule delivers a clean, intimate sound with strong vocal clarity. It excels at close-miking, which is essential for podcasting and streaming. Thanks to its cardioid polar pattern and dynamic design, the mic is naturally excellent at rejecting off-axis sound, such as room echoes or background traffic.

A major advantage is the built-in Digital Signal Processing (DSP) noise reduction, offering multiple levels (Weak, Medium, Strong). Reviewers have found this feature highly effective, keeping the focus locked onto the voice even in challenging real-world environments with noise like heaters or buzzing insects.

The Maono PD200W includes several features that make it a favorite for content creators:

With a claimed battery life of up to 60 hours (and around 38 hours with the RGB lighting active), it’s built for extended wireless sessions or long-duration remote recordings.

The companion software for Windows and Mac allows users to fine-tune their sound with an equalizer (EQ), compressor, and limiter—tools typically found in higher-end setups. It also includes the controls for the multi-level noise reduction. See app screenshots below and video review for more.

The microphone features 16.8 million colors of RGB lighting, which can be controlled via the software, appealing directly to gamers and streamers who want a visually engaging setup.

It supports connecting two PD200W microphones to a single receiver (2TX + 1RX), which is excellent for dual-host podcasts or interviews, recording each speaker on a separate channel for easier post-production.

The Maono PD200W is a near-perfect hybrid microphone for content creators on a budget who prioritize flexibility.

Overall, the Maono PD200W is highly recommended for streamers, remote workers, YouTubers, and podcasters who need a reliable, clean-sounding microphone that can adapt instantly between a fixed desktop setup and a mobile, cable-free recording scenario. It delivers studio-grade clarity without the studio price tag.

Maono App

 

 

Key Features 

  1. Triple-mode connectivity (2.4GHz wireless transmission, USB, and XLR) offering true plug-and-play convenience and freeing creators from the limitations of traditional wired microphones, while providing more versatile connection options than conventional dual-mode mics.
  2. Dual-channel stereo recording – a single receiver connects to two PD200W microphones simultaneously, capturing each signal separately on the left and right channels. This enables independent volume adjustment for each speaker in post-production and eliminates track-mixing issues in multi-person podcasts or interviews.
  3. Studio-grade sound quality – a major upgrade over ordinary microphones, with clarity that rivals the Shure MV7+.
    1. 48kHz/24-bit sampling for authentic, high-resolution sound
    2. 82dB signal-to-noise ratio for ultra-quiet recording with no unwanted noise
    3. 128dB max SPL for distortion-free performance even at high volumes
  4. Direct connection to smartphones, tablets, and cameras – going beyond traditional USB microphones by supporting mobile devices, allowing creators to start from entry-level gear, switch seamlessly across platforms, and reduce time spent on post-production.
  5. Multi-level noise reduction – minimizing environmental background sounds and room acoustics, delivering cleaner and more focused audio without the need for heavy post-processing.

Competitors

VS Shure MV7+

    1. Comparable sound quality to the MV7+, delivering a professional-grade listening experience.
    2. Unlike the wired-only setup of traditional multi-person podcasting, PD200W’s wireless mode offers a cleaner and more convenient solution.

VS Fifine K688 / AM8

    1. Compared to both the K688 and AM8, PD200W delivers more professional sound quality tailored for podcasting.
    2. Unlike these models, PD200W supports direct USB connections to smartphones and tablets, providing broader device compatibility and greater flexibility for creators.

Prices

PD200W (Desktop Stand Version): $99.99

PD200W (with BA37 Boom Arm): $129.99

PD200W (2TX+1RX with Desktop Stand): $189.99

BUY

Other Mic Reviews

Video Review

A Look Into Technology Used in Ground Penetrating Radar (GPR)

Ground Penetrating Radar (GPR) is a non-invasive geophysical method that uses electromagnetic radiation to image the subsurface. Over the past few decades, GPR technology has evolved significantly, allowing for high-resolution imaging in a variety of applications, from archaeology and civil engineering to military and environmental studies. This article explores the key technologies that make GPR effective, including its components, signal processing techniques, antenna types, and integration with modern innovations like AI and GPS.

 

  1. Fundamentals of GPR Technology

 

At its core, GPR operates by transmitting high-frequency radio waves (typically in the range of 10 MHz to 2.6 GHz) into the ground and analyzing the reflected signals from subsurface structures. The time it takes for the signals to return to the surface is recorded, and from this data, depth and material information can be inferred.
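The depth inference works because the wave's velocity in the ground follows from the material's relative permittivity: v = c / √εr, and the echo makes a round trip, so depth = v·t/2. A small worked example:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def gpr_depth(two_way_time_ns, rel_permittivity):
    """Depth of a reflector from two-way travel time: the wave travels
    down and back, so depth = v * t / 2, with v = c / sqrt(er)."""
    v = C / rel_permittivity ** 0.5          # wave velocity in the medium
    t = two_way_time_ns * 1e-9               # ns -> s
    return v * t / 2.0

# Dry sand (er ~= 4): a 50 ns echo corresponds to roughly 3.75 m depth.
print(round(gpr_depth(50.0, 4.0), 2))  # 3.75
```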

 

The key components of a GPR system include:

  • Antenna (transmitting and receiving)
  • Control unit
  • Display/processing system
  • Data storage system
  • Power supply

Each of these components plays a critical role in ensuring accurate, high-resolution subsurface imaging.

 

  2. Antenna Technology

 

  2.1. Shielded vs Unshielded Antennas

 

The antenna is the heart of a GPR system, responsible for emitting and receiving electromagnetic pulses. GPR antennas are generally classified into:

  • Shielded Antennas: Enclosed to minimize interference and used primarily in environments where clutter needs to be reduced, such as urban or archaeological sites.
  • Unshielded Antennas: Used in open areas such as geophysical or geological surveys, offering greater range but greater susceptibility to interference.

 

  2.2. Frequency and Resolution

 

The frequency of the antenna determines the depth of penetration and the resolution:

  • Low-frequency antennas (10–400 MHz): Greater depth (up to 30 meters or more), lower resolution.
  • High-frequency antennas (500 MHz–2.6 GHz): Limited depth (up to 1–2 meters), higher resolution—ideal for locating rebar, utilities, or shallow artifacts.
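This trade-off follows directly from the wavelength in the medium: a common rule of thumb puts vertical resolution near a quarter wavelength. An illustrative calculation, where the λ/4 rule and the permittivity value are textbook approximations rather than survey-design formulas:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_m(freq_hz, rel_permittivity):
    """Wavelength in the ground: lambda = v / f, with v = c / sqrt(er)."""
    return (C / rel_permittivity ** 0.5) / freq_hz

def vertical_resolution_m(freq_hz, rel_permittivity):
    """Rule of thumb: features closer than ~lambda/4 merge into one reflection."""
    return wavelength_m(freq_hz, rel_permittivity) / 4.0

# In moist soil (er ~= 9), raising frequency tightens resolution:
for f in (100e6, 400e6, 1.6e9):
    print(f"{f/1e6:>6.0f} MHz -> {vertical_resolution_m(f, 9.0)*100:.1f} cm")
```

At 100 MHz the quarter-wavelength limit is roughly 25 cm; at 1.6 GHz it shrinks to under 2 cm, but attenuation at that frequency restricts useful depth to a meter or two.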

 

  3. Data Acquisition Systems

 

Modern GPR systems utilize advanced control units that digitize analog signals and store them for processing. These units can operate with various antenna frequencies and are often capable of integrating multiple channels.

 

Key technologies include:

 

  • High-speed analog-to-digital converters (ADCs): Convert received signals into digital format with minimal loss.
  • Timing circuits: Ensure precise measurements of signal travel time, critical for depth estimation.
  • Onboard processing units: Allow real-time viewing and initial filtering of data, reducing post-processing time.

 

  4. Signal Processing and Imaging

 

Signal processing is central to GPR data interpretation. Raw GPR data consists of reflected waveforms that need to be cleaned, enhanced, and interpreted.

 

Common processing techniques include:

 

  • Time-zero correction: Aligns all reflections to a common starting point.
  • Dewow filtering: Removes low-frequency components unrelated to subsurface features.
  • Gain adjustment: Enhances deeper reflections that may have lower amplitudes.
  • Migration: Corrects for distortion caused by off-center reflections.
  • Background subtraction: Eliminates consistent noise patterns from the data.
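Background subtraction, the last technique above, often amounts to removing the mean trace from every column of the radargram. A small synthetic example on hypothetical data, not a real radargram:

```python
import numpy as np

# Synthetic radargram: samples (rows) x traces (columns), with horizontal
# ringing (identical in every trace) hiding a point reflector in one trace.
rng = np.random.default_rng(0)
n_samples, n_traces = 256, 64
ringing = np.sin(np.linspace(0, 20, n_samples))[:, None]   # same every trace
data = np.tile(ringing, (1, n_traces)) \
       + 0.05 * rng.standard_normal((n_samples, n_traces))
data[120, 30] += 2.0                                        # the real reflector

# Background subtraction: remove the mean trace, killing flat-lying noise
# that repeats identically across the profile.
cleaned = data - data.mean(axis=1, keepdims=True)

# The reflector now dominates the section.
peak = np.unravel_index(np.abs(cleaned).argmax(), cleaned.shape)
print(peak == (120, 30))  # True
```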

Advanced imaging techniques, such as 3D volume rendering and amplitude slice mapping, allow for detailed interpretation, especially in complex or layered environments.

 

  5. Electromagnetic Wave Propagation

 

GPR relies on the principles of electromagnetic (EM) wave propagation. The velocity of EM waves in the ground depends on the material’s dielectric permittivity, which varies based on composition, moisture content, and density.

 

Key electromagnetic concepts used in GPR include:

  • Reflection coefficient: Determines how much of the signal is reflected at material boundaries.
  • Attenuation: Signal loss due to absorption and scattering in the ground.
  • Refraction and diffraction: Affect how signals bend and spread, influencing the clarity of images.
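For non-magnetic, low-loss media, the reflection coefficient reduces to a simple function of the two permittivities. A worked example under that standard simplification:

```python
def reflection_coefficient(er1, er2):
    """Amplitude reflection coefficient at a boundary between two
    non-magnetic, low-loss media:
    R = (sqrt(er1) - sqrt(er2)) / (sqrt(er1) + sqrt(er2))."""
    a, b = er1 ** 0.5, er2 ** 0.5
    return (a - b) / (a + b)

# Dry sand (er ~= 4) over wet clay (er ~= 25): a strong, polarity-flipped echo.
print(round(reflection_coefficient(4.0, 25.0), 3))  # -0.429
```

The large magnitude explains why wet layers show up so vividly in GPR sections, and the negative sign is why interpreters watch reflection polarity as well as amplitude.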

Materials such as clay, saline water, and metal heavily attenuate signals, while dry sand or ice permits deeper penetration.

 

  6. Multi-Frequency and Step-Frequency GPR

 

Traditional GPR systems use fixed frequencies, but newer systems employ multi-frequency or step-frequency technology to improve resolution and depth simultaneously.

  • Multi-frequency GPR: Combines low and high-frequency antennas to balance depth and resolution in a single scan.
  • Step-frequency GPR (SFGPR): Sweeps across a wide range of frequencies, capturing more comprehensive data and enabling high-resolution spectral imaging.

SFGPR systems also reduce signal distortion and improve detection of small or subtle anomalies.

 

  7. Synthetic Aperture Radar (SAR) Techniques

 

Some GPR systems borrow from radar-based technologies such as Synthetic Aperture Radar (SAR) to improve lateral resolution. SAR techniques involve:

  • Moving the antenna along a track to simulate a large aperture.
  • Capturing multiple signals over time and synthesizing them into a coherent image.
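The delay-and-sum idea behind this synthesis can be demonstrated on synthetic data: each trace is sampled at the two-way time a reflection from a candidate point would arrive, and the samples are summed coherently. A toy sketch with idealized impulse echoes, no windowing or weighting:

```python
import math

def focus_point(antenna_xs, traces, target_x, target_depth, v, dt):
    """Delay-and-sum focusing for one subsurface point: look up each trace
    at that point's predicted echo time and sum coherently. A toy sketch of
    the SAR idea, not a full migration."""
    total = 0.0
    for x, trace in zip(antenna_xs, traces):
        r = math.hypot(x - target_x, target_depth)  # antenna-to-point range
        t = 2.0 * r / v                             # two-way travel time
        idx = int(round(t / dt))
        if 0 <= idx < len(trace):
            total += trace[idx]
    return total

# Synthetic data: a point scatterer at (x=0, depth=2 m), v = 1e8 m/s.
v, dt = 1e8, 1e-9
antenna_xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
traces = []
for x in antenna_xs:
    trace = [0.0] * 200
    t = 2.0 * math.hypot(x, 2.0) / v
    trace[int(round(t / dt))] = 1.0                 # impulse at the echo time
    traces.append(trace)

# Summing along the correct hyperbola recovers the full amplitude;
# an off-target point collects almost nothing.
print(focus_point(antenna_xs, traces, 0.0, 2.0, v, dt))  # 5.0
```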

This approach is particularly effective in vehicle-mounted or robotic GPR systems, where continuous scanning is feasible.

 

  8. Positioning and Mapping Integration

 

  8.1. GPS and GNSS

 

Accurate positioning is essential for mapping GPR data spatially. GPR systems are often integrated with:

  • GPS (Global Positioning System)
  • GNSS (Global Navigation Satellite Systems)

High-precision RTK (Real-Time Kinematic) GPS allows for centimeter-level accuracy, which is crucial for correlating anomalies with real-world locations, especially in civil engineering or archaeological applications.

 

  8.2. Geographic Information Systems (GIS)

 

GPR data is increasingly integrated into GIS platforms for spatial analysis and visualization. This allows users to overlay subsurface maps with surface infrastructure data, historical maps, or environmental data layers.

 

  9. Artificial Intelligence and Machine Learning

 

AI and ML are transforming GPR interpretation by automating data classification and feature detection. These technologies help identify patterns and anomalies that may be missed by human analysts.

 

Applications include:

  • Object detection (e.g., pipes, landmines, voids)
  • Layer classification (e.g., soil strata, pavement layers)
  • Anomaly recognition (e.g., buried artifacts, structural faults)

Deep learning models are trained on labeled datasets and can significantly reduce interpretation time while improving accuracy.

 

  10. Robotics and Autonomous Platforms

 

In environments that are hazardous or difficult to access, GPR systems are increasingly deployed on:

  • Drones (UAVs)
  • Rovers
  • Autonomous ground vehicles (AGVs)

These platforms use onboard sensors and AI navigation systems to scan large areas with minimal human intervention. This is particularly useful for disaster zones, military applications, or remote geological survey services such as Metroscan.

 

Challenges and Limitations

 

Despite its versatility, GPR has limitations that influence its effectiveness:

  • Signal attenuation in conductive soils (e.g., clay, saline environments)
  • Difficulty distinguishing overlapping reflections
  • Limited depth in high-frequency modes
  • Need for skilled interpretation

 

Ongoing research focuses on overcoming these issues through better signal processing, machine learning, and hybrid systems that combine GPR with other geophysical tools such as magnetometers or seismic sensors.

 

Final Word

 

Ground Penetrating Radar is a sophisticated and continually evolving technology. The integration of high-frequency antennas, advanced signal processing, AI, and positioning systems has greatly expanded its capabilities and applications. From detecting ancient ruins to mapping buried utilities and identifying underground hazards, GPR offers a unique, non-destructive window into the subsurface.

Future innovations are likely to focus on greater automation, deeper penetration, and more user-friendly interfaces, making GPR more accessible and effective across a broader range of industries.

 

Generate videos in Gemini and Whisk with Veo 2

Starting today, Gemini Advanced users can generate and share videos using our state-of-the-art video model, Veo 2. In Gemini, you can now translate text-based prompts into dynamic videos. Google Labs is also making Veo 2 available through Whisk, a generative AI experiment that allows you to create new images using both text and image prompts, and now animate them into videos.

By Angela Sun, Director of Multimodal Platforms, Gemini app & Olivia Sturman, Product Manager, Google Labs

Create videos in Gemini

Veo 2 represents a leap forward in video generation, designed to produce high-resolution, detailed videos with cinematic realism. By better understanding real-world physics and human motion, it delivers fluid character movement, lifelike scenes and finer visual details across diverse subjects and styles. 

To generate videos, select Veo 2 from the model dropdown in Gemini. This feature creates an eight-second video clip at 720p resolution, delivered as an MP4 file in a 16:9 landscape format. There is a monthly limit on how many videos you can create, but we will notify you as you approach it.

Creating videos with Gemini is simple: just describe the scene you want to create — whether it’s a short story, a visual concept, or a specific scene — and Gemini will bring your ideas to life. The more detailed your description, the more control you have over the final video. This opens up a world of fun creative possibilities, letting your imagination go wild to picture unreal combinations, explore varied visual styles from realism to fantasy, or quickly narrate short visual ideas.

 

 

One of the best parts of creating is sharing with others! Sharing your video on mobile is easy: simply tap the share button to quickly upload engaging short videos to platforms like TikTok and YouTube Shorts. 

Video generation is now rolling out to Gemini Advanced subscribers globally on web and mobile, starting today and continuing over the next few weeks. This feature is available in all languages Gemini supports. With video generation in Gemini, you don’t need complicated software, specialized equipment, or any prior experience to get creative. Try it out today at gemini.google.com.

How Whisk Animate brings your images to life 

Introduced last December, Whisk helps you create images using both text and image prompts to quickly explore and visualize new ideas. Today, those capabilities are expanding further with Whisk Animate. Launching today for Google One AI Premium subscribers globally, this new feature will allow you to take the images you generate and instantly turn them into vivid eight-second videos with Veo 2. Subscribers can try Whisk Animate today at labs.google/whisk.

How Safety is approached 

We’ve taken important steps to make video generation a safe experience. This includes extensive red teaming and evaluation aimed at preventing the generation of content that violates our policies. Additionally, all videos generated with Veo 2 are marked with SynthID, a digital watermark embedded in each frame, which indicates the videos are AI-generated.

Gemini’s outputs are primarily determined by user prompts, and like any generative AI tool, there may be instances where it generates content that some individuals find objectionable. We’ll continue to listen to your feedback through the thumbs up/down buttons and make ongoing improvements. For more details, you can read about our approach on our website.

Enjoy making videos in the Gemini app and Whisk as Google One AI Premium subscribers!

Viltrox Announces AF 135mm F1.8 LAB Z Lens: Redefines Z-Mount Flagship-Level Resolution

Viltrox is pleased to announce the availability of the AF 135mm F1.8 LAB Z. This full-frame, large-aperture telephoto autofocus lens is now available in Z-mount, delivering peak performance from optical excellence to operational design. Viltrox’s self-developed Quad HyperVCM motor technology ensures rapid, precise focusing. The lens achieves astonishing resolution wide open at F1.8, faithfully capturing sharp, true-to-life detail with the option of dreamy bokeh for stunning portrait photos. Ergonomic lens control buttons and customizable LCD startup animations enhance efficient control and visual personalization.

Viltrox’s patented HyperVCM motor: Fast, precise, quiet autofocus

Viltrox’s patented Quad HyperVCM motor technology significantly enhances thrust conversion efficiency, enabling faster, quieter, and more stable autofocus in video and photo shooting. Compared to traditional STM autofocus motors, HyperVCM focuses 150% faster and positions with micron precision, enabling 100ms switching between the closest and farthest focus points for professional-grade performance. The lens is also equipped with a dual floating focus system that optimizes sharpness and captures stunning close-ups at 0.72 meters with 0.25x magnification for versatile close-up shooting.

Ultimate soft, dreamy bokeh

The superior optical design significantly reduces chromatic aberration in bokeh, eliminating onion-ring artifacts and creating a soft, delicate effect beyond the sharp, clear focal plane, with a smooth transition between in-focus and out-of-focus areas. The eleven aperture blades keep bokeh smooth at the center, with minimal optical vignetting and no noticeable swirl. Even wide open at F1.8, the lens produces creamy, professional-grade bokeh ideal for portraits, while efficient light transmission ensures clear, pure, noise-free images even in low-light shooting environments.

Comprehensive operation, seamless creativity

This lens offers intuitive on-the-fly parameter adjustments for instant, hassle-free shots. Features include a three-stage focus limiter, dual VCM motors for fast, silent focusing, customizable dual Fn buttons, A-B focus switching, remote app control, a multifunction ring/click switch (FE/Z), and seamless auto/manual focus (AF/MF) switching for versatile shooting.

Pricing and availability

Viltrox Official Store: https://geni.us/AF135LABZ_PR

Viltrox Amazon store: https://www.amazon.com/dp/B0F17QZQ6F

MSRP: $899 / £829

More Viltrox news