This article describes how to get an HTTP-based Raspberry Pi (RPI) camera server up and running almost from scratch, hosted directly on the Pi itself.

Two different approaches will be used, both using Node.js on the backend:
  1. Clients can fetch a new picture from host/pi.jpg. This is trivial to get up and running using only Node.js' built-in modules, plus raspistill for taking pictures with the RPI camera module. A minimal HTML page with a few lines of JS will provide a slowly refreshing view of the camera.
  2. Creating a websocket and streaming frames over it. For this, on the server side I'll use Node.js with the websocket module ws, and ffmpeg for capturing MJPEG frames. Depending on the quality and settings, you can get almost real-time streaming that's compatible with any decent browser, such as Firefox or Chrome. The caveat is that it is more CPU intensive and has lower quality.

Requirements

A few tools need to be installed on the RPI.

  1. Get the RPI up and running with the camera module installed. Make sure you have raspistill installed on your RPI and that it is functional.
  2. For actual streaming, the video stream will be grabbed from /dev/videoX. If the virtual device doesn't show up, load the bcm2835-v4l2 device driver module.
    sudo modprobe bcm2835-v4l2
    
  3. Install Node.js. As of this writing, the latest version of Raspbian ships an ancient version of Node.js (0.3.2) in its default repositories, so grab the source and build the latest Node.js manually. I'll only use built-in packages in Node 6.10.3 for the picture approach, plus the ws module for streaming.

Still-Image Based Approach

Server Setup

The server implementation is minimal. On the server (RPI), two paths are served:

/pi.jpg
Take and serve the latest picture from the RPI to the client
/
Serve a static page containing only the image, and a client-side JS script to periodically update it

I'll get to the index.html in a bit.

Here's the implementation:

'use strict';

const http = require('http'),
      url = require('url'),
      fs = require('fs');

const server = http.createServer(function(req, res){
    req.requrl = url.parse(req.url, true);
    var path = req.requrl.pathname;
    if (path == '/'){
        var index = fs.readFileSync('index.html');
        res.writeHead(200, {'Content-Type': 'text/html'});
        res.write(index);
        res.end();
    }
    else if (path == '/pi.jpg'){
        serveImage(res);
    }
    else{
        res.writeHead(404, {'Content-Type': 'text/plain'});
        res.write('Page not found');
        res.end();
    }
});

server.listen(8080, function(){
    console.log('Listening on 8080');
});

The gist of serveImage(res) is to take a recent picture from the camera (or give the client the latest available image in case another client is using the camera). Its definition is given further below.

RPI functionality (just raspistill for the time being) will live in its own module inside rpi.js:

'use strict';

const spawn = require('child_process').spawn;

/*! Return a promise containing raspistill output
 *
 * @args arguments to pass to raspistill, space separated
 */
exports.raspistill = function(args){
    // flags must be passed as an array to spawn
    if (typeof args == 'string'){
        args = args.split(' ');
    }
    return new Promise(function(resolve, reject){
        var raspistill = spawn('raspistill', args);
        var ret = [];
        raspistill.stdout.on('data', function(data){
            ret.push(data);
        });

        raspistill.on('close', function(code){
            resolve(Buffer.concat(ret));
        });

        raspistill.on('error', function(err){
            reject(err);
        });
    });
};

The raspistill(...) function calls the system's raspistill executable and returns the result as a promise. I'm using spawn over exec here because raspistill may (will) return large buffers. See the docs for exec and spawn.

Side note: a child process' pipes have a fixed buffer size, and large unhandled output (e.g. on stderr) builds up backpressure that can eventually stall the child in ways that seem unexpected.

raspistill (the system executable) can be instructed to write images to disk, which the server could then read back and serve to clients, but that's inefficient and may even reduce the life of the disk. Instead, passing the -o - flag to raspistill writes pictures to stdout, skipping the extra I/O.

Since multiple clients can ask for an image at the same time, and the camera is physically capable of taking only one picture at a time, some sort of lock needs to be placed around raspistill. The serveImage(res) function incorporates a simple implementation of this:

const rpi = require('./rpi');

var imageCache = new Buffer('');
var piLock = false;

function serveImage(res){
    res.writeHead(200, {
        'Content-Type': 'image/jpeg'
    });
    // If camera is already in use, use cached image
    if (piLock){
        res.write(imageCache);
        res.end();
    }
    else{
        piLock = true;
        rpi.raspistill('-vf -hf -w 640 -h 480 -rot 90 -t 800 -o -')
            .then(function(data){
                imageCache = new Buffer(data.length);
                data.copy(imageCache);
                res.write(data);
                res.end();
                piLock = false;
            })
            .catch(function(err){
                res.end();
                piLock = false;
            });
    }
};

The arguments above are

-vf -hf
Flip images vertically and horizontally, respectively. Depending on your camera's orientation, these may be unnecessary.
-rot 90
Rotate the image by 90 degrees. Again, this may not be necessary depending on the orientation of your camera.
-t 800
Wait 800 ms before taking the image. I have found this to generally produce good images when it's sunny.
-o -
Write output to stdout, which is returned to us by our rpi.raspistill function

Note that spawn expects all flags as a list. Our raspistill(...) function takes care of that itself as needed.

Finally, the server serves a static HTML page that shows the image; here, the image is bound to /pi.jpg on the server. The page is ever so simple: it contains only an img element. To make the image dynamic, the client sends periodic AJAX requests to the server, replacing the current image with the latest one fetched. Consecutive requests for a new image need to be spaced at least 800 ms apart, since the timeout on raspistill was set to 800 ms.

The perhaps hacky trick is to append a query string to the image URL, tricking the browser into not using a cached image.

<!doctype html>
<html>
  <head>
    <style>
      body{margin:0; padding: 0;}
      #dimage{width: 100%; height: 100%;}
    </style>
  </head>
  <body>
    <img id="dimage" src="/pi.jpg">
    <script>
      var dom = document.getElementById('dimage');
      var updating = false;
      function updateImage(){
        if (!updating){
          updating = true;
          var req = new XMLHttpRequest();
          var r = Math.floor(Math.random()*100);
          req.open('GET', '/pi.jpg?r=' + r, true);
          req.responseType = 'arraybuffer';
          req.onload = function(e){
            var arr = new Uint8Array(this.response);
            var raw = String.fromCharCode.apply(null, arr);
            dom.src = "data:image/jpeg;base64," + btoa(raw);
            updating = false;
          }
          req.send();
        }
      }
      // Reload image every 1500 ms
      setInterval(updateImage, 1500);
    </script>
  </body>
</html>

In the snippet above, the random query value is drawn from the 0-99 range, which keeps collisions between requests made within a short period reasonably unlikely. Even when there is a collision, depending on the client browser, the image will probably still be reloaded; at least this is the case on Firefox 53.0.

Streaming Approach

The streaming approach requires a bit more oomph. The clients connect to a websocket, and the server sends them MJPEG images as quickly as it can produce them or as quickly as the clients can render them. For a client using a web browser, the image can be rendered directly on a canvas DOM element. Before getting started, make sure the /dev/video0 device from the Requirements section is available.

Instead of rolling my own websocket server (see e.g. writing Websocket servers), I cheated and used the ws module. Other modules like socket.io are way too bloated for this use case.

As a side note, if you've ever wondered how you might be able to implement a minimal websocket server, it would go something like this:

const net = require('net');

var WebSocket = net.createServer(function(socket){
    // Receiving data from client
    socket.on('data', function(data){
        // If TCP has already been upgraded to websocket
        if (socket.wsEstablished == true){
            var dataFrame = readDataFrame(data);
            console.log(dataFrame);
        }
        else{ // Websocket hasn't been upgraded yet
            // Get HTTP headers, if any
            var headers = parseHeader(data);
            // If requesting an upgrade
            if ( (headers.upgrade == 'websocket') && (headers.method == 'GET')){
                // Send upgrade request
                if (handshake(socket, headers)){
                    socket.wsEstablished = true;
                    console.log('established');
                }
            }
            // Accept only websocket connections and reject failed handshake
            if (!socket.wsEstablished){
                socket.write('Invalid request - Terminating connection.\n');
                socket.end();
                console.log('> Logging off ' + socket.name);
            }
        }
    });
    socket.on('end', function(){
        process.stdout.write('client closed connection.\n');
    });
});

// Start websocket server
WebSocket.listen(8080, function(){
    console.log('Server listening on port 8080');
});

Details of handshake(...) and readDataFrame(...) can be found here. You can of course use the built-in http module as well if you want to process HTTP requests:

const http = require('http');

const server = http.createServer(function(req, res){
    console.log('createServer(req, res)');
    // Default HTTP response
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('ok');
});

// Accept websocket upgrade requests
server.on('upgrade', function(req, socket, head){
    console.log('ON UPGRADE');

    // Get request headers
    var headers = req.headers;

    // If requesting an upgrade to websocket
    if (headers.upgrade == 'websocket'){
        // Accept upgrade request
        var response = handshakeResponse(headers['sec-websocket-key']);
        socket.write(response);
    }
    // Accept only websocket connections and reject failed handshake
    else{
        socket.write('Invalid request - Terminating connection.\n');
        socket.end();
        console.log('> Logging off ' + socket.name);
        return;
    }

    // Handle incoming connection for established websockets
    socket.on('data', function(data){
        if (socket.wsEstablished == true){
            var dataFrame = readDataFrame(data);
            console.log(dataFrame);
        }
    });

    socket.on('end', function(){
        process.stdout.write('client closed connection.\n');
    });
});

server.listen(8080, function(){
    console.log('Listening on 8080');
});

Back to the RPI. The rpi.js module is augmented with a function that opens a websocket stream, starts ffmpeg as a child process, and sends its data to attached clients. If all clients disconnect, the child process is stopped so the CPU isn't unnecessarily run up.

// rpi.js

const WebSocket = require('ws');
// spawn was already required above for raspistill
const spawn = require('child_process').spawn;

exports.raspistill = function(/*...*/){
    // ...
};


/* Start a streaming websocket on 'server' on path 'path' */
exports.openSocketStream = function(server, path){
    var wss = new WebSocket.Server({server: server, path: path});
    var FFMPEG = '';

    // Function to broadcast to all clients
    wss.broadcast = function(data){
        wss.clients.forEach(function (client){
            if (client.readyState === WebSocket.OPEN){
                client.send(data);
            }
        });
    };

    wss.on('connection', function(socket, req){
        // Start an FFMPEG child process if it doesn't exist
        if (FFMPEG == ''){
            FFMPEG = spawn('ffmpeg',[
                '-f', 'video4linux2', '-i', '/dev/video0', '-f', 'mjpeg',
                '-r', '10', '-s', '320x288', '-g', '0', '-b', '800000',
                '-preset', 'ultrafast', 'pipe:1']);

            FFMPEG.stdout.on('data', function(data){
                // Return the buffer as base64 encoded image
                var img = new Buffer(data).toString('base64');
                wss.broadcast(img);
            });

            FFMPEG.on('close', function (code) {
                console.log('ffmpeg exited: ' + code);
            });

            FFMPEG.on('error', function (err) {
                throw err;
            });
        }

        // Stop ffmpeg if no one else listening
        socket.on('close', function(){
            if (wss.clients.size == 0){
                FFMPEG.kill();
                FFMPEG = '';
            }
        });
    });

    return wss;
};

With these settings, the CPU ramps up to about 30% while ffmpeg is running. Also, the lighting has to be right; otherwise the stream will be too dark or too bright. I couldn't figure out how to get consistently good lighting for the stream.

To start the socket server, call this function in the main server.js:

// server.js

// Load modules as before

const server = http.createServer(function(req, res){
    // Other paths are as before
    // ...

    // Streaming page for web-browsers
    if (path == '/stream'){
        var c = fs.readFileSync('stream.html');
        res.writeHead(200, {'Content-Type': 'text/html'});
        res.end(c);
    }
});

// Listen on 'example.com/stream'
const ws = rpi.openSocketStream(server, '/stream');

// As before
server.listen(8080, function(){ /* ... */ });

The paths for the HTML page and the websocket happen to share the same name – they don't need to.

Finally, in the static page, open a websocket connection to the server (WebSockets are built into browsers these days; see e.g. Firefox), listen for incoming data, and render the data as images directly on a canvas element:

<!-- stream.html -->
<!doctype html>
<html>
  <head>
    <style>
      #canvasStream{
        width: 100%;
        height: 100%;
        transform: rotate(-90deg);
      }
      body{
        padding: 0;
        margin: 0
      }
    </style>
  </head>
  <body>
    <div id="container">
      <canvas id="canvasStream"></canvas>
    </div>
    <script>
      var loc = window.location;
      var wsUri = (loc.protocol === "https:") ? "wss:" : "ws:";
      // in the server implementation, the websocket address is the same as this page's URI
      wsUri += "//" + loc.host + loc.pathname;
      var canvas = document.getElementById('canvasStream');
      var canvasContext = canvas.getContext('2d');
      var ws = new WebSocket(wsUri);
      ws.onmessage = function(event){
          try{
              var img = new Image();
              img.src = 'data:image/jpeg;base64,' + event.data;
              img.onload = function(){
                  // match the canvas resolution to the incoming frame size
                  canvas.height = img.height;
                  canvas.width = img.width;
                  canvasContext.drawImage(img, 0, 0, canvas.width, canvas.height);
              };
          }
          catch(err){
          }
      };
    </script>
  </body>
</html>

Closing Thoughts

With just half an hour of hacking, a working, cheap RPI camera server hosted directly on the Pi can be created. If the server is going to be exposed outside the LAN, you will probably want an authentication system set up so that not everyone can access the feed.

As an extension, I made a small and quick viewer app to display the camera on my device (with either option to stream or refresh images) with some minimal authentication in place.

In the video, the camera server is hosted on the local LAN and the Android app is run in an Android emulator that connects to the local server. The ws:// prefix tells the app to connect directly to the websocket stream.

In this video, the RPI camera is pointed at my physical monitor, which shows the local time using something along the lines of

while true; do echo -ne "\r$(date +"%T.%N")"; done

with millisecond precision to demo the streaming and drawing rate of this approach.