Geza

GEZA 01 v.1

I had wanted to create an internet-connected ("internet of things") gadget of my own for a long time, so I was very excited when I received a contract to build a system that sends geological data from a construction site to an FTP server. On construction sites you have to measure a lot of geological variables, such as subsurface water levels, temperature or gravel movement. All these variables are measured by sensors on site and collected by a data logging station. In my case the data logging station and the sensors were made by Geokon. Until now, if you wanted the data from the data logger, you had to go on site in person and download them to a computer. The problem with this approach is that it costs about 150 USD to send someone there, and since it costs so much, hardly anyone does it. Nature is often unpredictable, and you need the freshest geological data you can get in order to prevent events such as a landslide rolling over your highway (http://blogs.agu.org/landslideblog/2014/06/09/litochovice-1/).

Device

The device itself consists of a Raspberry Pi A+ (the least power-hungry Raspberry Pi), a USB modem, a power supply and a timer. All these parts sit inside a water-resistant box with an antenna and a serial port on the side. The Raspberry Pi runs a normal Debian Linux which is set to start the transfer program right after boot. The program is written in Python. When the program starts, it initiates communication with the Geokon data logger over the serial port and downloads the geological data into RAM. Then the program initializes the modem and uploads the acquired data to the FTP server, where they are saved as a CSV file that can be read by Geokon's software. Anyone you give access to the FTP server can read the data. Finally, the program shuts the Raspberry Pi down and waits until the timer cuts the power.
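A minimal sketch of this flow in Python, assuming the pyserial library; the serial port name, the FTP credentials and the read_logger_data() helper are placeholders, because the actual Geokon download protocol is not reproduced here:

    import ftplib
    import io
    import subprocess

    import serial  # pyserial

    SERIAL_PORT = "/dev/ttyAMA0"   # assumed port; depends on wiring
    FTP_HOST = "ftp.example.com"   # placeholder server and credentials
    FTP_USER = "user"
    FTP_PASS = "secret"

    def read_logger_data(port):
        """Hypothetical stand-in for the Geokon download protocol:
        here we simply read until the logger stops sending."""
        chunks = []
        while True:
            chunk = port.read(4096)
            if not chunk:
                break
            chunks.append(chunk)
        return b"".join(chunks)

    def main():
        # Download the measurements into RAM over the serial line.
        with serial.Serial(SERIAL_PORT, 9600, timeout=10) as port:
            data = read_logger_data(port)

        # Upload the CSV to the FTP server (the modem link is assumed up).
        with ftplib.FTP(FTP_HOST) as ftp:
            ftp.login(FTP_USER, FTP_PASS)
            ftp.storbinary("STOR measurements.csv", io.BytesIO(data))

        # Shut the Pi down; the external timer cuts the power afterwards.
        subprocess.run(["shutdown", "-h", "now"])

    if __name__ == "__main__":
        main()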

Geokon data logger

Power supply

The main challenge of this project was the required power consumption envelope. The device was expected to send data from the data logger to the server three times a week for at least three months without any maintenance. Using a big battery was not possible, as I had to fit the whole device into a small box that would slide into a steel tube above a borehole. Because of the space constraints I had to settle for a sealed lead-acid battery (12 V / 1.2 Ah) and push the idle power consumption as low as possible.

Measurements showed that the Raspberry Pi alone in idle (with HDMI turned off) draws 80 mA at 5 V, the modem adds another 70 mA at 5 V (the Raspberry Pi is not capable of powering down its USB port), and the DC-DC converter another 20 mA at 12 V. That is a lot of energy. My first idea was to use the Raspberry Pi itself as the timer, which obviously cannot work, as it would require a car battery for any long-term operation. So instead of timing the upload schedule with, say, cron on the Raspberry Pi, I used a cheap external timer which cuts the power to the whole device. These timers are great since they are very easy to use and anyone can set the upload schedule.

Inside these timers you can find two circuit boards. The top main board with the display contains all the logic, and the bottom one provides power, battery backup and load switching. The original timer had a power consumption of 4 mA with the load switched off. This sounds reasonable until you realize that the timer alone would eat the battery in 12.5 days. I had to remove the bottom board and replace it with my own circuit, which has almost no idle power consumption. The main board draws an unmeasurably small current, so I simply connected it to an AA battery, which should last for years. One of the main board's wires carries +1.5 V when switched on; I connected this wire to my custom board, where it drives a transistor which switches a relay, which in turn switches the load. With this power system the idle consumption is almost zero, and the battery will sooner self-discharge than be depleted.
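As a sanity check, here is how the numbers above pencil out (a back-of-the-envelope estimate; converter losses and the current peaks during boot are ignored, so the real upload count ends up lower):

    # Back-of-the-envelope power budget using the figures quoted above.
    BATTERY_AH = 1.2                           # 12 V / 1.2 Ah lead-acid battery

    # Original timer board: a 4 mA standby drain alone kills the battery.
    standby_days = BATTERY_AH / 0.004 / 24     # = 12.5 days

    # One upload: Pi (80 mA) + modem (70 mA) at 5 V, converter 20 mA at 12 V.
    watts = 5 * (0.080 + 0.070) + 12 * 0.020   # ~0.99 W
    battery_amps = watts / 12                  # ~83 mA drawn from the battery
    ah_per_upload = battery_amps * 0.25        # 15-minute upload window

    uploads = BATTERY_AH / ah_per_upload       # ~58 uploads in the ideal case
    print(f"{standby_days:.1f} days standby, {uploads:.0f} uploads")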

Control

This device makes acquiring the data from the data logger very easy. You place your sensors on site and connect them to the Geokon data logger. Then you connect the data logger to my device through a serial cable. Inside the device you set up the timer with the upload schedule: every time you want the device to upload fresh data, set the timer to switch on and then off again after about 15 minutes. The main battery should last for about 50 uploads, which gives you a run-time of three months at three uploads a week.

Future

In the future I would like to lower the power consumption further by putting a switch on the modem's USB power, so it is not powered when not in use. But that probably would not have a great impact on run time. A greater impact could be achieved by using an Arduino or a similar board, but the lack of memory on those devices is a big problem.

 

If you are interested in this device or even wish to buy one yourself, please write to me below.

Android Camera


This blog post is based on my work on the Looking Glass project. The first thing you have to do when you want to create an augmented reality application is to get a real-time feed of the world in front of you. If you have a dedicated device like the Epson Moverio, you are good to go, because there is just a pair of see-through glasses you can look through. But if you want to do augmented reality with Google Cardboard or a similar device, you have a big problem. Since you cannot see through your phone (for now, at least), you have to use your phone's camera. Basically, you take the stream from the phone's camera, do some lens distortion correction and then show it on the phone's screen. The phone I used for this experiment is a Nexus 5, which has quite a few limitations. The sad thing is that despite many promises from several years ago, the camera is still not 3D and can only do 30 fps. So I am stuck with a 2D camera and 30 fps, which despite my doubts works quite well for an augmented reality app.
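Just to illustrate the distortion-correction step, here is a toy sketch of the usual radial model (the coefficients are made-up example values, not taken from any real viewer):

    import numpy as np

    def apply_radial_distortion(xy, k1=0.22, k2=0.24):
        """Map normalized, centered image coordinates through the simple
        radial model r' = r * (1 + k1*r^2 + k2*r^4) commonly used to
        pre-distort frames for lens-based viewers. k1 and k2 here are
        made-up example values."""
        r2 = np.sum(xy * xy, axis=-1, keepdims=True)
        return xy * (1.0 + k1 * r2 + k2 * r2 * r2)

    # Example: where the corner point (0.5, 0.5) lands after distortion.
    print(apply_radial_distortion(np.array([[0.5, 0.5]])))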

A much, much bigger problem is latency. Humans can very easily spot latency problems, which you can test yourself by playing any fast computer game on a not-so-fast computer. For virtual reality that augments the real world, as in Looking Glass, the maximum tolerable latency is about 20 milliseconds (for comparison, a normal screen refreshes every 16 milliseconds). So in that time I have to capture a frame, dig it through about 10 software layers (thanks, Android, for the extra Java overhead, grrr), some hardware and a bunch of buffers, and then show it on screen to the user. With the current state of cell phone technology this is mission impossible, but I have tried to find the fastest way.

To find the fastest solution I developed four implementations with an identical purpose: to show a real-time feed from the camera on the screen. The implementations are done in C++ with the Qt framework, together with the same Java glue. The first two implementations are just for comparison, to establish the base speed of the system. The other two are built for maximum performance, and I also provide source code for them.

  1. Implementation 1: The standard approach with the Android API and plain Java, which simply puts the camera preview into the app layout.
  2. Implementation 2: Since Qt still does not provide access to the camera through the QCamera API, I used the Camera element from QtQuick.
  3. Implementation 3: This implementation grabs frames from the camera through Android's Camera.PreviewCallback. Frames provided through this API are unfortunately in the NV21 colour format, so I copy the whole frame through JNI to C++, where I convert it to RGB (the conversion is sketched after this list). Then I upload the RGB image to an OpenGL texture which is rendered on screen.
  4. Implementation 4: The last implementation is the most interesting one. Since I am rendering everything in OpenGL anyway, it would be great if I could get an OpenGL texture directly, and Android has a facility just for that: an OpenGL extension called OES_EGL_image_external. This extension creates a texture of type GL_TEXTURE_EXTERNAL_OES with some special features, but it mostly works like any other GL_TEXTURE_2D; you just need to rewrite your fragment shaders to support it. Thankfully, Android's implementation of OpenGL is basically just a wrapper over the underlying C implementation. This allows me to create the OpenGL texture in the NDK with C, pass the texture id through JNI to Java within the current OpenGL context, and let Java feed the frames into the texture.
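For clarity, here is roughly what the NV21-to-RGB step in implementation 3 does, sketched in Python with NumPy (my real code does this in C++; the coefficients are the standard BT.601 ones):

    import numpy as np

    def nv21_to_rgb(data, width, height):
        """Convert an NV21 frame (as delivered by Camera.PreviewCallback)
        to RGB. NV21 is a full-resolution Y plane followed by a
        half-resolution interleaved V/U plane."""
        frame = np.frombuffer(data, dtype=np.uint8)
        y = frame[:width * height].reshape(height, width).astype(np.float32)
        vu = frame[width * height:].reshape(height // 2, width // 2, 2)
        vu = vu.astype(np.float32)
        # Upsample the chroma plane to full resolution (nearest neighbour).
        v = vu[:, :, 0].repeat(2, axis=0).repeat(2, axis=1) - 128.0
        u = vu[:, :, 1].repeat(2, axis=0).repeat(2, axis=1) - 128.0
        # BT.601 YUV -> RGB conversion.
        r = y + 1.402 * v
        g = y - 0.344 * u - 0.714 * v
        b = y + 1.772 * u
        return np.clip(np.dstack((r, g, b)), 0, 255).astype(np.uint8)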

Source Code

For the testing I used a GoPro camera recording at 240 fps and a Nexus 5 running CyanogenMod 11. The phone was pointed at a small LED, and the GoPro filmed both the LED and the phone's screen. Each time the LED turned on, I counted the number of GoPro frames between the event itself and its appearance on the phone's screen. The frame counts are written in the table below. From several identical experiments I computed the average number of frames, and from that an approximate latency (each GoPro frame is about 4.2 ms).

Run      Implementation 1  Implementation 2  Implementation 3  Implementation 4
1        50                52                33                60
2        44                57                63                24
3        24                39                58                25
4        56                35                32                26
5        84                31                42                30
Average  51.6 frames       42.8 frames       45.6 frames       33.0 frames
Latency  approx. 216 ms    approx. 179 ms    approx. 191 ms    approx. 138 ms
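The conversion from frame counts to milliseconds is trivial; using my rounded value of about 4.2 ms per GoPro frame:

    # Convert the averaged GoPro frame counts above into latency estimates.
    FRAME_MS = 4.2  # rounded from 1000/240 ~= 4.17 ms per frame

    averages = {1: 51.6, 2: 42.8, 3: 45.6, 4: 33.0}  # frames, from the table
    for impl, frames in averages.items():
        print(f"Implementation {impl}: {frames * FRAME_MS:.1f} ms")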

From these results you can clearly see that today's Android phones are really not suited for augmented reality, due to the massive latency. The first implementation clearly shows that Android's own camera preview is not fast at all, probably due to Java overhead and software image rendering. The speeds of implementations 2 and 3 are similar; from reading Qt's source code I found that Digia's implementation does pretty much the same operations as mine. The last one is the fastest I found. I tried to find out why, but I was not able to dig deep enough through Android's sources to find the actual cause, because it depends heavily on the camera drivers, which are closed source. For the Looking Glass project I have chosen implementation number four, and even though it is not perfect, it is good enough for augmented reality. When I get to the laboratory I am planning to redo this experiment with more samples (the current ones vary quite a lot) and also to test Sailfish OS and Ubuntu Phone.

Atlas improvements

It has been quite some time since the last release. A lot of things have been added to Atlas, and a lot has been removed. First, I dropped support for Qt 4, because Qt 5 is becoming quite mature and because I need QtQuick. Right, QtQuick is the biggest change in Atlas: I have started rewriting the whole UI from the software-rendered QtWidgets to the hardware-accelerated QtQuick. The work is quite complex and time consuming (now I understand why no one wants to rewrite those Motif-based applications 😀). Currently the application is a hybrid of both worlds, but it will get better. It is cool to have a HW-accelerated GUI, but more importantly it gives me proper support for Android. One of the many problems with Android is that it does not allow a mixed SW/HW rendering application. Until now it was possible to run Atlas on Android, but without the UI, which was kinda useless.

I have also been working on other parts of the editor. Loading times are down by 50% thanks to code refactoring and better use of the CPU cache. RAM usage has also decreased dramatically; this was achieved with in-memory compression (the idea is sketched below). There were also some improvements in rendering, which resulted in a higher frame rate.
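To illustrate the idea only (Atlas itself is C++/Qt, so this is a concept sketch in Python, not the actual Atlas code): rarely used blobs stay compressed in RAM and are inflated only on access.

    import zlib

    class CompressedStore:
        """Concept sketch of in-memory compression: keep rarely touched
        blobs zlib-compressed in RAM and decompress them on access."""

        def __init__(self):
            self._blobs = {}

        def put(self, key, data):
            # Trade a little CPU time for a much smaller resident size.
            self._blobs[key] = zlib.compress(data)

        def get(self, key):
            return zlib.decompress(self._blobs[key])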