The visualization technique is called 'Volume Rendering'; the Wikipedia article has quite a bit of information on it:
It also covers optimizations and related topics. The only two techniques whose workings I know are:
a slice-based approach, where you render a stack of semi-transparent textures oriented towards the viewer, slicing through the volume data and mapping each scalar field value to a texture color via what is called a lookup table (LUT)
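The LUT step above can be sketched in a few lines. This is a minimal NumPy illustration, not tied to any particular rendering API; the LUT shape (transparent blue at low values, opaque red at high values) is a made-up example of a common transfer-function design:

```python
import numpy as np

# Hypothetical 1D lookup table: 256 RGBA entries mapping an 8-bit scalar to a color.
# Low values fade to transparent blue, high values to opaque red.
lut = np.zeros((256, 4), dtype=np.float32)
lut[:, 0] = np.linspace(0.0, 1.0, 256)       # red ramps up with the scalar value
lut[:, 2] = np.linspace(1.0, 0.0, 256)       # blue ramps down
lut[:, 3] = np.linspace(0.0, 1.0, 256) ** 2  # opacity grows with the scalar value

def classify(slice_scalars):
    """Map a 2D slice of 8-bit scalar data to an RGBA texture via the LUT."""
    return lut[slice_scalars]  # fancy indexing: (H, W) -> (H, W, 4)

# Toy 4x4 slice of scalar data standing in for one slice through the volume
slice_data = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
rgba = classify(slice_data)
print(rgba.shape)  # (4, 4, 4)
```

In a real slice-based renderer this classification usually happens on the GPU, with the LUT stored as a small 1D texture sampled in a fragment shader.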
a raycasting-based approach, where you sample the volume data along sight rays originating at the camera center and passing through each pixel on your projection plane. I think this approach has more potential for optimization: you can terminate rays early, it parallelizes well using fragment shaders for the per-pixel computation, it allows e.g. supersampling/antialiasing, and it can be adapted dynamically for a quality/performance tradeoff
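The per-ray compositing with early termination can be sketched as follows. This is a CPU-side Python sketch of front-to-back alpha compositing for a single ray; the function name and the opacity threshold are my own choices, and a real implementation would live in a fragment shader with trilinear volume sampling:

```python
import numpy as np

def composite_ray(samples, lut, opacity_threshold=0.99):
    """Front-to-back compositing of scalar samples along one ray.

    samples: scalar values (LUT indices) along the ray, nearest first.
    lut:     (N, 4) array of RGBA entries, the transfer function.
    Stops early once accumulated opacity is nearly 1 (early ray termination).
    """
    color = np.zeros(3)
    alpha = 0.0
    for s in samples:
        r, g, b, a = lut[s]
        # Front-to-back "over" operator: remaining transparency weights new sample
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        if alpha >= opacity_threshold:
            break  # the ray is effectively opaque; later samples can't contribute
    return color, alpha
```

A fully opaque sample near the camera ends the loop immediately, which is exactly the early-termination saving mentioned above.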
You might also want to do some additional research on the more recent state of the art.