"How does the ‘second Pi inside the house’ setup work?"

Not as easily as a Pi in the shack and VLC in the house, because VLC is already built.

The Pi in the shack runs software called Node.js which (when you code it to do so) provides a web server interface.
That interface is designed to control the Pi in real time using a technology called WebSockets, which send non-audio data back and forth between the Pi and the web page. That way my S-meter code, my nice large "LED" web display code and so on update the web page in real time, at the same time as I command my radios, usually a Yaesu FT-991 but sometimes an FT-857D, to scan at a rate of about 50 ms per channel and control all their other functions via CAT.
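As a rough illustration (not my actual code), the scan loop looks something like this in Node.js; the `pollChannel` and `broadcast` hooks stand in for the real CAT serial and WebSocket plumbing, and the JSON message shape is just an example:

```javascript
// Sketch of the scan loop: tune each channel in turn, read the
// S-meter, and push the reading to the browser(s) in real time.
// pollChannel and broadcast stand in for the real CAT/WebSocket code.

// Encode one reading as the JSON text frame the web page decodes.
function sMeterMessage(freqHz, sMeter) {
  return JSON.stringify({ type: 'smeter', freqHz: freqHz, sMeter: sMeter });
}

// Step through the channel list at roughly stepMs per channel.
function startScan(channels, pollChannel, broadcast, stepMs = 50) {
  let i = 0;
  return setInterval(() => {
    const freqHz = channels[i % channels.length];
    const sMeter = pollChannel(freqHz);        // tune + read via CAT
    broadcast(sMeterMessage(freqHz, sMeter));  // WebSocket push to page
    i++;
  }, stepMs);
}
```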
The radio control is done via a 38400 baud direct USB connection to the FT-991, or via a standard serial interface to SDRUno running on another machine. SDRUno also has CAT control, using a subset of the Kenwood CAT command set.
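For anyone curious what the CAT traffic actually looks like: the FT-991 speaks Yaesu's ASCII command set, plain text terminated by a semicolon, over that serial link. A simplified sketch follows; the helper names are mine and the exact reply format should be checked against the FT-991 CAT manual:

```javascript
// Yaesu ASCII CAT commands for the FT-991 -- plain text ending in ';',
// sent over the USB serial port at 38400 baud.

// Set VFO-A frequency in Hz; the FT-991 expects a 9-digit field.
function setFrequencyCmd(hz) {
  return 'FA' + String(hz).padStart(9, '0') + ';';
}

// Ask for the S-meter reading; the radio replies with something
// like 'SM0120;' (check the CAT manual for the exact reply shape).
function readSMeterCmd() {
  return 'SM0;';
}

// Pull the raw numeric meter value out of a reply such as 'SM0120;'.
function parseSMeterReply(reply) {
  const m = /^SM0(\d{3});$/.exec(reply);
  return m ? parseInt(m[1], 10) : null;
}
```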
The audio of the radio or SDRUno or whatever is connected to the Pi via a USB-to-audio dongle, and is sent by the Pi using a program called FFmpeg, which has to be compiled on the Pi along with codecs of choice: Opus, which offers excellent high-speed compression for voice, or simply PCM, which is just sliced audio blocks.
The technology to play live audio in real time in a web browser is effectively non-existent, unless you code it yourself or use a JavaScript library. Getting real-time web-browser audio going properly is my personal holy grail. But the audio stream, in PCM or MP3 over RTP, can be read by VLC, and by a small test app from FFmpeg called FFplay on a Mac if you pass it the right information in an .sdp file. FFplay can only be invoked via a terminal program or a bash command, but that can be set up to run nicely on a Mac using their new inbuilt JavaScript scripting.
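For reference, the .sdp file that tells VLC or FFplay what the RTP stream contains is only a few lines. The addresses, port and payload type below are placeholders for whatever your stream actually uses:

```
v=0
o=- 0 0 IN IP4 192.168.0.10
s=Shack audio
c=IN IP4 192.168.0.10
t=0 0
m=audio 5004 RTP/AVP 97
a=rtpmap:97 opus/48000/2
```

Then on the Mac something like `ffplay -protocol_whitelist file,udp,rtp shack.sdp` plays it; recent FFmpeg builds insist on the protocol whitelist when reading .sdp files.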
If I choose to use a web browser to play the sound, then I have to program the Pi to stream in AAC format. That involves the Pi breaking the audio stream into chunks and storing those chunks in (say) ten 1-second files on disk, along with a .m3u playlist file which describes their contents/size. That means a 10-second delay, which is the shortest I can get AAC working at. The Pi/Node web server then serves those little files in succession so that the browser reassembles them into an audible audio stream.
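The playlist the Pi serves alongside the chunks looks roughly like this (HLS-style; the sequence number and filenames are just an example of a rolling window of chunks):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:1
#EXT-X-MEDIA-SEQUENCE:42
#EXTINF:1.0,
chunk42.aac
#EXTINF:1.0,
chunk43.aac
#EXTINF:1.0,
chunk44.aac
```

The browser keeps re-fetching the playlist, notices the new chunk entries as the sequence number advances, and downloads and plays them in order, which is where the delay comes from.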
All of the programming of all the above has to be done personally by hand.
The only reason I've gone into detail is to give an indication of what is required so at least you know whether you might contemplate it or not.
The other system mentioned by SDRPlay Support, openwebrx, is truly amazing from my perspective: a single programmer solved a whole bunch of those problems, streaming his own custom compressed audio and waterfall information, drawing the waterfall a line at a time using SVG graphics (from memory).
I only wish his project would be taken up and supported sufficiently to take it to the next level; it does exactly what people on this board seem to be crying out for, as I am. Coding is so specialised that I cannot do it myself: I started in assembler and then Forth, did a bit of C++ when it was first invented, then got stuck in industry doing web/internet front end, middleware and server work, and have next to no experience in the necessary areas. Large parts of the SDRPlay development, on the other hand, are directly relevant, so they could do it.

Perhaps if enough of us asked nicely?
Regards, Phil