Multispectral Imaging Networked Capture Controller

Todd Hanneken, May 2025

The premise is that controlling a camera and lights is the work of a device, not Windows software. The device is accessed over a network, either wired or wireless. The astrophotography community independently identified the same solution. Users buy and connect the device, then do all the work on their own computers. Most users will not really know what processor, operating system, and software are doing magic inside the box. Most tasks will be done in a web browser on any desktop or mobile operating system (iOS, macOS, Windows, Android, Linux). An open question is whether some tasks will still require a terminal emulator and a willingness to use a Linux shell and text editor. The device is not intended to be used with a directly connected mouse, keyboard, and monitor.

Cameras

Known compatible cameras include QHY and Canon. ZWO libraries exist but have not been tested. The QHY PCI fiberoptic card has not been tested. Todd is not aware of the underlying requirements of the E7. Any shared library (driver) that works on Linux can be made to work. In the unlikely event that libraries exist for the x86 architecture but not the Arm architecture, hardware requirements will change.

Lights

Known compatible lights are those controlled by Arduino and Raspberry Pi Pico, including USB and Bluetooth options for the Octopus lights. The Misha light controller code on GitHub has been incorporated and will be tested June 2025. The code from 2023 to control a Microchip microcontroller was successful with its own lights but not the MegaVision Ocho board. The solution may be as simple as identifying the correct SPI codes.

Data Storage

Data moves from the camera to the controller via USB or Ethernet. Once on the controller, data can be stored in three ways.

  1. Data can be stored internally. From there, data can be shared on the network immediately or later. Network shares can be SMB (Windows), NFS (Linux), or a web interface such as Apache directory index.
  2. Data can be stored directly to a network share. For large projects, this would be a beneficial way to move data to a more powerful computer that can process data and interact with more humans without risk of impacting the capture.
  3. Data can be stored to a removable storage device.
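As a sketch of how capture software might choose among these three destinations, the snippet below prefers a mounted network share, then a removable drive, then falls back to internal storage. The mount points and the preference order are assumptions for illustration, not the project's actual configuration.

```python
import os

# Candidate destinations, checked in order: network share, removable
# drive, internal storage. The paths are illustrative assumptions.
DESTINATIONS = [
    "/mnt/share",                # a network share (SMB or NFS)
    "/media/usb",                # a removable storage device
    "/home/thanneken/Pictures",  # internal storage (always available)
]

def storage_root():
    """Return the first mounted destination, else the internal fallback."""
    for path in DESTINATIONS[:-1]:
        if os.path.ismount(path):
            return path
    return DESTINATIONS[-1]
```

A capture run would then write under storage_root(), so the same code works whether or not a share is mounted.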

Hardware

The hardware requirements are scalable, starting at a Raspberry Pi for less than $100. The original intent is that the capture controller does only capture and the most basic quality control, then sends the data to other machines for further processing. In that case, processing power and RAM are not concerns for a 4GB Raspberry Pi (tested with 62-megapixel, 16-bit data). The main reason to upgrade would be ports.

A second Ethernet port would be necessary if one wanted to connect to a wired network and an Ethernet-connected camera at the same time. WiFi is built into the Raspberry Pi.

The Raspberry Pi has two USB 3.1 ports. A camera is likely to occupy one. An external storage device or second camera (but not both) could occupy the other.

The Raspberry Pi has two USB 2.0 ports. Each non-Bluetooth Octopus light requires a USB port. It is not problematic to use an external USB hub for low-speed traffic.

The Raspberry Pi has an SD storage card port. Writing image data to the SD card (rather than a network share) is a speed bottleneck. The Raspberry Pi can connect an NVMe solid-state storage device over a PCIe bus and add-on board (HAT), but the speed is not the fastest available for NVMe drives. If one wanted the fastest possible internal storage, then a more serious motherboard would be in order.

Also, if one wanted to connect the QHY PCIe “Graber” card for fiber-optic cable connections, the motherboard requires PCIe 2.0 x8. Fiber optic is necessary if one needs to run 50m from a camera to a controller. Because the capture controller is small and controlled remotely, it can be placed very close to the camera. Todd is skeptical that the real-world speed boost from USB 3.1 to fiber Ethernet will be noticeable enough to change a project, or worth the list of things that can go wrong.

Connecting

There are a variety of topologies for connecting a networked capture controller that has both WiFi and a wired Ethernet port. The method that assumes complete independence is for the controller to use the onboard WiFi radio as a WiFi access point, to which the operator can connect from a laptop or phone. If a WiFi or wired network is available (on-site or part of the kit), the controller can connect to that network. If wired, the address of the controller can be reserved in advance, discovered by scanning the subnet, or discovered by connecting temporarily to the onboard WiFi. If wireless, the SSID and password can be programmed in advance or set using the temporary onboard WiFi.
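The subnet-scanning option can be sketched in Python: enumerate the usable addresses of a subnet and probe each for an open web port. The subnet, port, and timeout below are placeholder assumptions to substitute for a real network.

```python
import ipaddress
import socket

def candidate_hosts(cidr):
    """List the usable host addresses in a subnet, e.g. '192.168.1.0/24'."""
    return [str(host) for host in ipaddress.ip_network(cidr).hosts()]

def probe(host, port=80, timeout=0.2):
    """Return True if the host accepts a TCP connection on the given port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: find hosts on an assumed subnet answering on the web port.
# controllers = [h for h in candidate_hosts("192.168.1.0/24") if probe(h)]
```

A dedicated scanner such as nmap would be faster; this only shows that discovery needs nothing beyond the standard library.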

User Interface

The biggest area for development as of May 2025 is the user interface, at least if the command line is not considered tolerable. After a terminal emulator, the interface Todd finds most tolerable is a web interface, which can be accessed through a browser on a phone, tablet, laptop, or desktop. If someone wants to develop apps for the Apple App Store, Google Play Store, or similar, that is outside the scope of Todd’s interests.

Live View

The only task for which a web interface is already functioning is live view for focus. Live view offers the full frame at reduced resolution and a small region at full resolution. The QHY cameras support up to 4x4 pixel binning, so only a sixteenth of the data needs to be transferred to the controller, which can optionally scale further before sending to the operator. This mode also reduces exposure time and increases frame rate. The QHY cameras also support specific regions of interest, so only the data from that region is transferred to the controller.
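The data reduction from 4x4 binning can be illustrated without any camera SDK: average each 4x4 block of a frame, leaving one sixteenth of the pixels. This pure-Python sketch uses nested lists as a stand-in for real frame data.

```python
def bin_frame(frame, factor=4):
    """Average each factor x factor block of a frame (a list of rows).

    Returns a frame with 1/factor**2 as many pixels, mirroring what
    on-camera binning leaves to be transferred to the controller.
    """
    h = len(frame) - len(frame) % factor
    w = len(frame[0]) - len(frame[0]) % factor
    binned = []
    for r in range(0, h, factor):
        row = []
        for c in range(0, w, factor):
            block = [frame[r + i][c + j]
                     for i in range(factor) for j in range(factor)]
            row.append(sum(block) / len(block))
        binned.append(row)
    return binned
```

In practice the camera does this on-chip; the point is only the sixteen-fold reduction in data volume.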

Shoot

The basic command to start a capture sequence is simple and could easily be triggered from a web interface. Dropdowns could be used to select the hardware configuration and shotlist, which should not change much during a project. A textbox would be necessary to type a name for the target. The design question is what information needs to be presented to the operator after starting the sequence. Options include a thumbnail, a detail view, or a histogram of the captured image, or basic stats such as a count of pixels above a certain threshold value. Besides a conventional shotlist, the idea of the shotlist could be adapted for a single shot, a light test, etc.
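One of the post-shot statistics mentioned above, a count of pixels above a threshold, takes only a few lines. The default threshold is an illustrative assumption for 16-bit data, not a project setting.

```python
# Count pixels at or above a threshold, e.g. to flag clipping in
# 16-bit data (full scale 65535). The default value is an assumption.
def count_above(pixels, threshold=65000):
    """Return how many pixel values meet or exceed the threshold."""
    return sum(1 for p in pixels if p >= threshold)
```

A capture sequence could report this number for each shot, letting the operator catch overexposure without inspecting every frame.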

Edit Shotlist

Todd is a fan of editing text files because regular expressions and vim exist. Part of a shotlist text file might read:

uv385-NoFilter-100ms
uv385-WrattenBlue98-125ms
uv385-WrattenGreen61-50ms
uv385-WrattenRed25-40ms
uv385-WrattenInfrared87-500ms
uv385-WrattenInfrared87C-1500ms
RakingLeft-NoFilter-100ms
RakingRight-NoFilter-100ms

The basic format is light-filter-exposure. Instructions could also be given to check the chip temperature and wait for the cooler to catch up, if necessary. Web interface options could include a large text box, a series of small text boxes for each shot or each shot parameter, or a screen full of dropdown menus. If dropdown menus are used, the design should anticipate what happens when the available options change. Should shots be draggable to change the sequence?
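A minimal parser for one line of this format might look like the following. The field handling, including the "ms" suffix on the exposure, is inferred from the examples above rather than taken from the project's code.

```python
# Parse one shotlist line of the form light-filter-exposure, e.g.
# "uv385-WrattenBlue98-125ms". The "ms" suffix handling is inferred
# from the example shotlist, not from the project's actual parser.
def parse_shot(line):
    light, filt, exposure = line.strip().split("-")
    if not exposure.endswith("ms"):
        raise ValueError("exposure must end in 'ms': " + line)
    return {"light": light, "filter": filt, "exposure_ms": int(exposure[:-2])}
```

Because the format is this regular, a web editor and a text editor can read and write the same file interchangeably.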

Edit System Profile

The current system profile format is a text file.

sensor: QHY600
lens: Milvus50
aperture: F2
cool: -15
gain: 26
basepath: /home/thanneken/Pictures
lights: OctopusBluetooth

Some of that information is prescriptive and some descriptive, depending on the system. Similar design questions apply here. Do we trust users to type sensible values and then validate after submission? Do we create dropdown menus of advisable values?
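Because the profile is a simple key: value text file, reading it takes only a few lines. This is a sketch under the assumption that every non-blank line follows the key: value pattern; it is not the project's actual loader.

```python
# Parse a system profile of "key: value" lines into a dict.
# Blank lines are skipped; values are kept as strings so validation
# can happen in one place afterward.
def parse_profile(text):
    profile = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        key, _, value = line.partition(":")
        profile[key.strip()] = value.strip()
    return profile
```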

Everything Else

The major functions are certainly doable in a web interface. Processing (including dark subtraction, flattening, and color) is not intended for the capture controller. Many routers demonstrate that many things can be done in a web interface. It would take a lot to anticipate everything that one would want to do. A partial list:

  1. Change settings for connecting to a WiFi network
  2. Pull software updates from GitHub
  3. Update Linux packages
  4. Mount and unmount external storage devices and network shares
  5. Modify Python code without committing it to GitHub. For example, the lookup tables that translate the name of a light or filter to a port should be editable on a per-user basis.

Maybe the list is doable if it is finite and we can successfully anticipate every needed action.
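For item 5, the per-user lookup table could be as simple as a Python dict mapping a light name to a port. The names below come from the example shotlist; the port numbers are invented for illustration and are not the project's actual assignments.

```python
# Hypothetical per-user lookup table mapping light names to ports.
# Port numbers are invented for illustration.
LIGHT_PORTS = {
    "uv385": 3,
    "RakingLeft": 7,
    "RakingRight": 8,
}

def port_for(light):
    """Translate a light name to its configured port, with a clear error."""
    try:
        return LIGHT_PORTS[light]
    except KeyError:
        raise KeyError("no port configured for light: " + light) from None
```

Keeping this table in a user-editable file, rather than buried in committed code, is exactly the point of item 5.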

Software

The software that runs on the controller is available on GitHub. As of June 2025, development efforts are focused on compatibility with all the components to be tested in the Multispectral Imaging Component Rodeo. Documentation and user experience improvements will come later.