How to make glowing buttons in VR
When developing experiences for virtual reality (VR), it can be difficult to guide players towards the actions you want them to take. We aren't able to simply move the camera to the next point of interest, as you might in a traditional videogame, and UI overlays can often be jarring and break the player's immersion. While playtesting one of Magnopus UK's VR experiences, we found that even prompting the player to press a simple button proved difficult for first-time VR users.
Our initial button implementation used a simple emissive colour property to make it light up, but some of our users were still reluctant to press it. Before resorting to more heavy-handed techniques like UI overlays, we felt that one way to add more agency to the button was to make it feel tactile and more inviting to press: giving it a rubberised feel, with an LED bulb sitting just beneath the surface, something that would appeal to people's senses.
Buttons can be fun to push in the real world, but without proper haptic feedback, the best we can do is overload our visual and auditory senses to compensate and let the player fill in the gaps.
This article takes a look at how we used a simple shader trick to simulate a light that sits beneath the surface of geometry and is convincingly refracted in stereoscopic 3D.
Considerations
As our button was to be used in VR, it was important that the effect rendered stereoscopically. Additionally, the shader needed to be performant and work with UE4's Forward+ Renderer, which prevented us from using more physically accurate approaches (such as subsurface scattering) and raised performance concerns about translucent materials, refraction, and real-time dynamic lights.
However, we were confident that we could approximate these effects entirely in the pixel shader, without the need for any additional graphics buffers, which would allow us to use an opaque blend mode (the cheapest blend mode!).
The following is a tutorial for replicating our solution, so that you can render this effect in your own projects!
Drawing a 3D sphere in the pixel shader
The first challenge is to render a sphere inside our button, in order to represent the spherical glow from our LED bulb — but before we tackle this, let’s go over how we can draw a circle.
A circle is one of the easiest shapes to draw mathematically, as it can be defined by a single value — its radius. The simplest way to draw a circle in the pixel shader is to test each pixel’s distance from the origin of the circle. If the distance is less than or equal to the radius value, then we should draw the circle for this pixel. If the distance is greater than the radius value, then we should not draw the circle.
Example of drawing a circle in the pixel shader.
To draw a radial gradient instead, we divide the distance by our radius and output the clamped 0–1 values.
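To make this concrete, here is a minimal sketch of both the hard-edged test and the gradient, written as plain C++ with the GLM maths library standing in for shader code (the function names are ours, purely for illustration):

```cpp
#include <glm/glm.hpp>

// Hard-edged circle: returns 1 inside the radius, 0 outside.
float circleMask(glm::vec2 uv, glm::vec2 centre, float radius)
{
    float dist = glm::length(uv - centre); // distance from this pixel to the circle's origin
    return dist <= radius ? 1.0f : 0.0f;
}

// Radial gradient: distance divided by radius, clamped to 0-1.
// We flip the result so the centre is brightest, which suits a glow.
float radialGradient(glm::vec2 uv, glm::vec2 centre, float radius)
{
    float dist = glm::length(uv - centre);
    return 1.0f - glm::clamp(dist / radius, 0.0f, 1.0f);
}
```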
A 3D sphere can be described in the same way as a 2D circle, by a radius, and can likewise be rendered by querying the distance to a point. To render a true 3D sphere, we could use a technique called ‘raymarching’, which iteratively tests the distance between the camera and a surface, but this would be overkill for what we need. Since a sphere looks the same from any vantage point, the 2D radial gradient we looked at earlier will suffice if we imagine it as a camera-facing sprite.
Our 2D circle was conceived in UV space, but we want our ‘3D’ circle to exist in world space, appearing inside our button's geometry. To do this, we can draw inspiration from an existing material function in UE4 called ‘SphereGradient-3D’.
This material function computes two key elements:
The first is the distance between the camera and the sphere. This is simply the length of the vector between the camera's position and the origin of our sphere.
The second is the distance from each pixel to the sphere's centre plane. This is computed by taking the vector (with its magnitude) from each pixel to the origin of the sphere and finding the dot product of this with the pixel's direction vector towards the camera.
Once these two elements are calculated, we have enough information in the pixel shader to draw a circle in this view space using the Pythagorean Theorem.
We can then scale this by our radius value, to generate the linear radial gradient that appears to exist in 3D coordinate space.
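Putting those pieces together, a sketch of the full gradient might look something like this (again plain C++ with GLM standing in for shader code; cameraDir plays the role of UE4's CameraVector, and the parameter names are illustrative):

```cpp
#include <cmath>
#include <glm/glm.hpp>

// Linear spherical gradient, evaluated per pixel in world space.
float sphereGradient3D(glm::vec3 pixelPos, glm::vec3 cameraPos,
                       glm::vec3 sphereOrigin, float radius)
{
    // Direction from this pixel towards the camera (UE4's CameraVector).
    glm::vec3 cameraDir = glm::normalize(cameraPos - pixelPos);

    // Vector (with its magnitude) from the pixel to the sphere's origin.
    glm::vec3 toSphere = sphereOrigin - pixelPos;

    // Distance from the pixel to the sphere's centre plane,
    // measured along the view direction.
    float along = glm::dot(toSphere, cameraDir);

    // Pythagoras: the remaining leg is the perpendicular distance from
    // the sphere's centre to the view ray through this pixel.
    float perpSq = glm::dot(toSphere, toSphere) - along * along;
    float perp   = std::sqrt(glm::max(perpSq, 0.0f));

    // Scale by the radius and flip, giving a 0-1 gradient brightest at the centre.
    return 1.0f - glm::clamp(perp / radius, 0.0f, 1.0f);
}
```

Because each eye has its own camera position, the gradient is evaluated against each eye's own view ray, which is what lets the effect hold up stereoscopically.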
By doing all of this work in the pixel shader and only sampling the data we need, we are in effect treating our button geometry as a 2D canvas to draw our shader onto. As a result, we can simulate the effect of something being drawn from within the geometry.
Although you can use the SphereGradient-3D material function straight out of the box to achieve this, you may find it worthwhile building out the logic yourself to get a good understanding of the technique, as this can help inform decisions you make later on in the development of the shader.
For our button, we pass in the object's world position, summed with a small vertical offset, as the origin for our sphere, as we want our light to appear as though it sits just below the object's pivot point.
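In code form that's a one-liner; here's a sketch assuming a Z-up world, with bulbDepth as an illustrative parameter rather than a value from our actual setup:

```cpp
#include <glm/glm.hpp>

// Place the virtual bulb just below the button's pivot point.
// objectPos is the button's world position; bulbDepth is a small, hand-tuned offset.
glm::vec3 bulbOrigin(glm::vec3 objectPos, float bulbDepth = 0.5f)
{
    return objectPos - glm::vec3(0.0f, 0.0f, bulbDepth); // Z-up: push the origin down
}
```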
Now that we have a spherical gradient, we can start using this data to drive the look of our glowing button!
Styling the glow effect
The falloff for our light source does not look very convincing yet, so we will want to add some controls for attenuating it. One way to do this is to use a smoothstep function, which takes a value (such as our normalized, linear falloff) and remaps it along a mathematically defined sigmoid, or ‘S’, curve.
We can then adjust the min/max values of the function to adjust the falloff for our button’s glow. Min/max values that are closer together result in a tighter falloff, whereas values that are further apart result in a softer falloff.
We should also take this opportunity to add some colour to our light. We can make a vector parameter to define our colour and multiply this with our light falloff. Next, we add a scalar parameter to use as a multiplier for this colour, so that we can overdrive its values and have the glow picked up in the bloom pass.
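As a sketch, the whole styling stage boils down to a few lines (falloffMin, falloffMax, and intensity are illustrative parameter names):

```cpp
#include <glm/glm.hpp>

// Shape the falloff and tint the glow.
// gradient        - the 0-1 spherical gradient from earlier
// falloffMin/Max  - smoothstep edges; closer together = tighter falloff
// lightColour     - vector parameter for the LED's colour
// intensity       - scalar multiplier, overdriven above 1 to trigger bloom
glm::vec3 glowColour(float gradient, float falloffMin, float falloffMax,
                     glm::vec3 lightColour, float intensity)
{
    float falloff = glm::smoothstep(falloffMin, falloffMax, gradient);
    return lightColour * falloff * intensity; // fed to the material's emissive output
}
```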
Modulating the point light with surface information
We now have what looks to be a light, but it doesn't yet appear to sit inside our button's geometry. In order to make this effect convincing, we want the light to appear as though it is being refracted and bounced about in different directions under the surface.
To do this, we need to query the model’s vertex normal information and use this to offset the origin of our sphere gradient for each pixel. This gives the impression that the light is being refracted, or magnified underneath the surface, and really helps to sell the physicality of the light source and the button.
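A minimal sketch of that per-pixel offset, with distortion as an illustrative strength parameter:

```cpp
#include <glm/glm.hpp>

// Nudge the sphere's origin along the surface normal before the gradient is
// evaluated. Pixels with different normals now 'see' the light in slightly
// different places, mimicking refraction under the surface.
glm::vec3 refractedOrigin(glm::vec3 sphereOrigin, glm::vec3 vertexNormal, float distortion)
{
    return sphereOrigin + vertexNormal * distortion; // a distortion of 0 disables the effect
}
```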
If you have a normal map for your button, you can also throw this into the shader as additional information to perturb the light. As we are already doing all the work in the pixel shader, this only costs us an additional texture lookup (which you may already be using for the PBR normal output, anyway).
We can also sample an AO map to attenuate the brightness of the button, which helps us to further communicate that this light source is beneath the surface.
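Pulling everything together, a hedged sketch of the complete per-pixel evaluation might read as follows, with any normal-map perturbation already folded into surfaceNormal and the AO sample applied at the end:

```cpp
#include <cmath>
#include <glm/glm.hpp>

// Full per-pixel assembly (illustrative): perturb the light with the surface
// normal, resolve the spherical gradient, shape and tint it, then attenuate
// with ambient occlusion.
glm::vec3 buttonGlow(glm::vec3 pixelPos, glm::vec3 cameraPos, glm::vec3 bulbOrigin,
                     glm::vec3 surfaceNormal, // vertex normal, optionally blended with a normal map
                     float radius, float distortion,
                     float falloffMin, float falloffMax,
                     glm::vec3 lightColour, float intensity, float aoSample)
{
    glm::vec3 origin    = bulbOrigin + surfaceNormal * distortion; // per-pixel 'refraction'
    glm::vec3 cameraDir = glm::normalize(cameraPos - pixelPos);
    glm::vec3 toSphere  = origin - pixelPos;

    float along    = glm::dot(toSphere, cameraDir);
    float perp     = std::sqrt(glm::max(glm::dot(toSphere, toSphere) - along * along, 0.0f));
    float gradient = 1.0f - glm::clamp(perp / radius, 0.0f, 1.0f);

    float falloff = glm::smoothstep(falloffMin, falloffMax, gradient);
    return lightColour * falloff * intensity * aoSample; // AO darkens crevices, reinforcing depth
}
```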
And there we have it — finally, a button that looks good enough to touch!
Further use cases
Although this example used a button, this effect scales well to any scenario where you want to simulate a point light beneath the surface of something, like paper lanterns, or even E.T.'s index finger.
You can also use data serialization techniques to encode the locations of your interior point lights into the mesh geometry. This can be extremely useful when you want to draw multiple point lights inside the same geometry, such as a device with multiple back-lit buttons, or maybe a Christmas tree with a string of coloured lights. You could also encode additional data, such as the light radius, falloff, colour, distortion amount, and so on.
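As a sketch of the idea, here's one hypothetical packing scheme of our own invention, where each light's mesh-local position is stored across two spare UV sets:

```cpp
#include <glm/glm.hpp>

// Decode a per-light origin packed into spare texture coordinates.
// uv1 carries the light's local X/Y and uv2.x its Z; this packing is purely
// illustrative, and any spare vertex data channel (extra UV sets, vertex
// colours) can carry the same information.
glm::vec3 decodeLightOrigin(glm::vec2 uv1, glm::vec2 uv2, glm::mat4 localToWorld)
{
    glm::vec3 localPos(uv1.x, uv1.y, uv2.x);                    // light position in mesh-local space
    return glm::vec3(localToWorld * glm::vec4(localPos, 1.0f)); // into world space for the gradient maths
}
```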
Be mindful that although this is a single mesh draw call, the shader is applied to the whole geometry, and as such, we incur the cost of resolving the interior point light for every pixel on the mesh. Therefore, depending on whether you are pixel bound or draw-call bound, you may wish to consider splitting your geometry into two parts: one that uses the shader, and one that doesn't.
You could also consider sampling a thickness map and using it to attenuate the falloff/brightness of your light source, giving more accurate results.
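A minimal sketch of that attenuation, with thicknessInfluence as an illustrative strength parameter:

```cpp
#include <glm/glm.hpp>

// Attenuate the glow by a thickness map sample (0 = thin, 1 = thick).
// Thicker areas absorb more light, so the glow dims as thickness grows.
glm::vec3 applyThickness(glm::vec3 glow, float thicknessSample, float thicknessInfluence)
{
    return glow * glm::mix(1.0f, 1.0f - thicknessSample, thicknessInfluence);
}
```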
In summary, conceptualising your geometry as a canvas for your shader, onto which you can additively apply arbitrary data, can be a really powerful tool and can help you to think outside the box… or in this case, inside the button.