

February 21 2014

Getting started with V-Rep with Octave on Ubuntu for AMRx

This edX Autonomous Mobile Robots course started last week, and the V-Rep simulator with an Octave/Matlab interface is going to be a big part of the optional exercises for the course.  There is a free temporary license available for Matlab, but I don't like installing proprietary binaries on my Linux system, especially temporarily (Linux and Ubuntu really need a standard way of installing applications into per-user directories that don't require system root).  So I'm trying out the Octave route.

Octave 3.6.2 is available as a standard package for my Ubuntu 12.10 install, but it didn't work initially with the AMRx exercise 1 test scripts, so I had to build my own remApi.oct.

Building remApi.oct

Get the mkoctfile binary for Octave:

sudo apt-get install octave-pkg-dev

There is a file in the vrep tar ball to run within octave:


It needs some setup first, which is documented within it:

cd V-REP_PRO_EDU_V3_1_0_64_Linux/programming/remoteApiBindings/octave/
cp ../../remoteApi/* .
cp ../../include/* .
octave:4> buildLin

extApiPlatform.c: In function ‘extApi_readFile’:
extApiPlatform.c:222:8: warning: ignoring return value of ‘fread’, declared with attribute warn_unused_result [-Wunused-result]
remApi.cc: In function ‘octave_value_list FsimxAddStatusbarMessage(const octave_value_list&, int)’:
remApi.cc:161:35: warning: ‘octave_value::octave_value(const charNDArray&, bool, char)’ is deprecated (declared at /usr/include/octave-3.6.2/octave/../octave/ov.h:237) [-Wdeprecated-declarations]
remApi.cc: In function ‘octave_value_list FsimxCopyPasteObjects(const octave_value_list&, int)’:
remApi.cc:2834:31: warning: ‘void Array<T>::resize(octave_idx_type) [with T = float; octave_idx_type = int]’ is deprecated (declared at /usr/include/octave-3.6.2/octave/../octave/Array.h:459) [-Wdeprecated-declarations]
remApi.cc: In function ‘octave_value_list FsimxUnpackInts(const octave_value_list&, int)’:
remApi.cc:2856:29: warning: ‘void Array<T>::resize(octave_idx_type) [with T = octave_int<int>; octave_idx_type = int]’ is deprecated (declared at /usr/include/octave-3.6.2/octave/../octave/Array.h:459) [-Wdeprecated-declarations]

octave:4> exit

cp remApi.oct ~/own/edx_amrx/exercise1/code/common/libs/octave/linuxLibrary64Bit

cd ~/own/edx_amrx/exercise1/code/common/vrep


octave:1> test

And it works!  The connection from octave to the running V-REP is established, and then the script commands the simulation to start and then stops soon after.


The edX platform has some annoying quirks I've documented elsewhere.  But other than that it is pretty good.

V-REP is also impressive (coming from using Gazebo a great deal over the past six months), but it has an annoying quirk: a mouse right click can both rotate the view and open a context menu whose very first option closes the 3D view window.  So it is very possible, especially on a laptop trackpad, to try to rotate the view and accidentally close it instead.

Debugging steps (not part of the solution)

The first thing I tried was to launch vrep.sh from the vrep tar ball, load the exercise 1 ttt scene, and then enter the  exercise1/code/common/vrep/  directory, launch octave, and try to run test.m:

octave:2> conn = simulation_setup();
octave:3> robot_nb=0
robot_nb = 0
octave:4> conn = simulation_openConnection(conn, robot_nb )
error: simulation_openConnection: /home/lwalter/own/edx_amrx/exercise1/code/common/vrep/../libs/octave/linuxLibrary64Bit/remApi.oct: failed to load: liboctinterp.so: cannot open shared object file: No such file or directory
error: called from:
error:   /home/lwalter/own/edx_amrx/exercise1/code/common/vrep/simulation_openConnection.m at line 30, column 28

I have liboctinterp.so.1, but no liboctinterp.so, so in a user directory on my LD_LIBRARY_PATH I added links to it and to other libraries that were subsequently not found:

ln -s /usr/lib/x86_64-linux-gnu/liboctinterp.so.1 ~/other/install/lib/liboctinterp.so
ln -s /usr/lib/x86_64-linux-gnu/liboctave.so.1 ~/other/install/lib/liboctave.so
ln -s /usr/lib/x86_64-linux-gnu/libcruft.so.1 ~/other/install/lib/libcruft.so

Update: on Ubuntu 13.04, where I built remApi.oct first, these steps were unnecessary.

I tried test.m again and ran into this problem:

octave:3> connection = simulation_openConnection(connection, robotNb);
error: simulation_openConnection: /home/lwalter/own/edx_amrx/exercise1/code/common/vrep/../libs/octave/linuxLibrary64Bit/remApi.oct: failed to load: /home/lwalter/own/edx_amrx/exercise1/code/common/vrep/../libs/octave/linuxLibrary64Bit/remApi.oct: undefined symbol: _ZN5ArrayI12octave_valueED0Ev
error: called from:
error:   /home/lwalter/own/edx_amrx/exercise1/code/common/vrep/simulation_openConnection.m at line 30, column 28

I saw some references to being able to rebuild remApi.oct, so I set out to do that next.

February 06 2014

Text-to-speech audio books with text image videos for youtube

Down and Out in the Magic Kingdom by Cory Doctorow has a very permissive license for reuse, so I've gone through the steps of making an audio book with images of the text and putting it on youtube:

To do this, the first thing was to download the text from the Cory Doctorow site:

There are some issues with text encoding that I mostly plowed through, though I suspect another process for conversion to UTF-8 could have worked better.

First thing is to get rid of some &#45; entities (I think they were dashes) in vim:


Also needed to remove the U+FFFD unicode replacement characters ( http://en.wikipedia.org/wiki/Specials_(Unicode_block) ).


Also replacing tabs with spaces turned out to be necessary.
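The exact vim substitutions aren't preserved above; a rough Python equivalent of the three cleanup steps (decoding the &#45; entities, dropping U+FFFD, expanding tabs) might look like:

```python
def clean_text(s):
    # decode the HTML-entity dashes back into plain dashes
    s = s.replace("&#45;", "-")
    # drop the U+FFFD replacement characters left by bad encoding
    s = s.replace("\ufffd", "")
    # replace tabs with spaces
    s = s.replace("\t", "    ")
    return s
```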

Imagemagick wouldn't do automatic line breaks for me later in this process (though pango might have worked), so adding line breaks to keep lines under 80 characters was necessary:

fmt ../Cory_Doctorow_-_Down_and_Out_in_the_Magic_Kingdom.txt > ../Cory_Doctorow_-_Down_and_Out_in_the_Magic_Kingdom_line_breaks.txt  

There were still some odd question marks generated by convert in the text; I hand-edited to remove the worst one, the one that would have appeared on the title of the book.

Next thing was to split the book at every blank line into roughly 1500 text files which will probably be short enough to show in a single image:

csplit -f down -b '%05d.txt' ../*.txt '/^$/' '{*}'
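For reference, the same blank-line split can be sketched in Python (csplit's exact file naming and delimiter handling differ slightly; this only shows the grouping):

```python
import itertools

def split_on_blank_lines(lines):
    # group consecutive non-blank lines into chunks, splitting at blank lines
    chunks = []
    for is_blank, group in itertools.groupby(lines, key=lambda l: l.strip() == ""):
        if not is_blank:
            chunks.append(list(group))
    return chunks

# each chunk would then be written out as down00000.txt, down00001.txt, ...
```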

Next is the conversion of each of the split text files into HD png files

for i in *.txt; 
do convert -background black -fill white -size 1920x1080 -pointsize 45 -gravity center label:"$(<$i)" PNG8:"$i.png"; 
done

And then generate wave files from each of the 1500 text files:

for i in *txt;
do pico2wave -w "$i.wav" "$(<$i)";
done

Videos are then created by putting the png images together with the wave files; this part is very similar to the process in http://binarymillenium.com/2013/07/turn-set-of-mp3s-into-static-image.html

for i in *.txt; 
do avconv -loop 1 -r 1 -i "$i.png" -c:v libx264 -i "$i.wav" -c:a aac -b:a 32k -strict experimental -shortest "$i.mp4"; 
done

Some conversions resulted in 0-length mp4s with this error:

[buffer @ 0x8959e0] Invalid pixel format string '-1'

This turned out to be caused by some of the convert output pngs being 16-bit instead of 8-bit (why it wasn't consistent I don't know; most were 8-bit), but putting PNG8: into the convert command line fixed it.
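To spot the offending 16-bit files without opening each one, the bit depth can be read straight out of the PNG header (byte 24, right after the IHDR width and height). A minimal Python check, assuming well-formed files:

```python
import struct

def png_bit_depth(data):
    # PNG layout: 8-byte signature, 4-byte chunk length, b"IHDR",
    # 4-byte width, 4-byte height, then a 1-byte bit depth
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    assert data[12:16] == b"IHDR", "first chunk is not IHDR"
    return data[24]

# hand-built IHDR header for a 1x1, 8-bit image, for illustration only
header = (b"\x89PNG\r\n\x1a\n" + struct.pack(">I", 13) + b"IHDR"
          + struct.pack(">II", 1, 1) + bytes([8, 0, 0, 0, 0]))
```

Files reporting 16 here would be the ones avconv chokes on; the PNG8: prefix forces convert to emit 8-bit output.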

Create a text file listing of all the mp4 files:

rm all_videos.txt 
for i in *mp4; 
do echo $i; echo "file '$i'" >> all_videos.txt; 
done

And concatenate all the mp4 files together into one giant 6 hour video with no recompression (only 500MB though):

mkdir output
avconv -f concat -i all_videos.txt -c copy output/down_and_out.mp4

For the first few minutes on youtube it looked like the video was all black instead of showing the titles, but a few minutes later this was fixed.

February 04 2014

Installing Full Desktop ROS Hydro from source on Ubuntu 13.10

Since there aren't any ROS packages for 13.10, I did a full catkin source install as specified in http://wiki.ros.org/hydro/Installation/Source.  I'm also going to do a full gazebo 2.0 install from source in order to debug http://answers.gazebosim.org/question/5223/setting-projector-pose-vs-enclosing-link-pose/ .

As I understand it, the proper use of catkin is to create a catkin workspace for all the standard ROS stuff, build and install it ( ./src/catkin/bin/catkin_make_isolated --install ), source the install setup from that install ( source ~/ros_catkin_ws/install_isolated/setup.bash ), and then go on and create a new catkin workspace to actually do development in.  Otherwise the build times will be ridiculous if catkin has to traverse 250 packages.


Since the core gazebo isn't a ros package (yet?) it ought to be built separately following the instructions on http://gazebosim.org/wiki/2.0/install .

I ran into this error near the end of the build:

[ 99%] Building CXX object interfaces/player/CMakeFiles/gazebo_player.dir/GazeboDriver.cc.o
In file included from /home/lwalter/other/gazebo_source/gazebo/interfaces/player/GazeboInterface.hh:26:0,
from /home/lwalter/other/gazebo_source/gazebo/interfaces/player/GazeboDriver.cc:25:
/home/lwalter/other/gazebo_source/gazebo/interfaces/player/player.h:22:38: fatal error: libplayercore/playercore.h: No such file or directory
#include <libplayercore/playercore.h>

So install libplayer-dev? No, that is a different player. I had libplayerc3.0-dev and libplayerc++3.0-dev installed already, and the file in question was located in /usr/include/player-3.0/libplayercore/playercore.h but gazebo wasn't seeing it.

I'm sure I could have done this cleaner, but I just hand-edited interfaces/player/CMakeLists.txt:

include_directories( /usr/include/player-3.0 ${SDF_INCLUDE_DIRS} ${PLAYER_INCLUDE_DIRS} ${OPENGL_INCLUDE_DIR} ${OGRE_INCLUDE_DIRS} ${Boost_INCLUDE_DIRS})

I got a lot of these warnings but the build reached 100% (I haven't fully tested yet, so they may yet cause problems):

/usr/bin/ld: warning: libboost_system.so.1.49.0, needed by /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libsdformat.so, may conflict with libboost_system.so.1.53.0

The post-install bashrc instructions are not quite what is on the gazebo install page; I had to do this:

export DEST_DIR=/home/lwalter/other/install
export LD_LIBRARY_PATH=$DEST_DIR/lib/x86_64-linux-gnu/:$LD_LIBRARY_PATH
export PATH=$DEST_DIR/bin:$PATH


Something went wrong in the ros libstage package: it never generated a config.h from ros_catkin_ws/src/stage/config.h.in ( https://github.com/rtv/Stage/blob/master/config.h.in ) - possibly due to not having the environment variables pointing at gazebo correctly.

[ 10%] Building CXX object libstage/CMakeFiles/stage.dir/gl.o
[ 12%] Building CXX object libstage/CMakeFiles/stage.dir/logentry.o
/home/lwalter/other/ros_catkin_ws/src/stage/libstage/file_manager.cc:5:45: fatal error: config.h: No such file or directory
 #include "config.h" // to get INSTALL_PREFIX
compilation terminated.
make[2]: *** [libstage/CMakeFiles/stage.dir/file_manager.o] Error 1
make[2]: *** Waiting for unfinished jobs....
[ 14%] Building CXX object libstage/CMakeFiles/stage.dir/model.o
/home/lwalter/other/ros_catkin_ws/src/stage/libstage/model.cc:141:45: fatal error: config.h: No such file or directory
 #include "config.h" // for build-time config
compilation terminated.
make[2]: *** [libstage/CMakeFiles/stage.dir/model.o] Error 1
make[1]: *** [libstage/CMakeFiles/stage.dir/all] Error 2
make: *** [all] Error 2
<== Failed to process package 'stage':
Command '/home/lwalter/other/ros_catkin_ws/install_isolated/env.sh make -j4 -l4' returned non-zero exit status 2
Reproduce this error by running:
==> cd /home/lwalter/other/ros_catkin_ws/build_isolated/stage && /home/lwalter/other/ros_catkin_ws/install_isolated/env.sh make -j4 -l4

The really ugly hack solution is to create config.h by hand:

vi /home/lwalter/other/ros_catkin_ws/src/stage/libstage/config.h

#define INSTALL_PREFIX "/home/lwalter/other/install/"
#define PLUGIN_PATH "/home/lwalter/other/install/usr/local/lib"
#define VERSION "3.0.2"
#define PROJECT "Stage"

That much worked, though those values may cause problems later if not correct.

Telling ROS about Gazebo

(I didn't discover that the gazebo bashrc instructions were wrong until after going through these steps; they probably aren't necessary.)
==> cmake /home/lwalter/other/ros_catkin_ws/src/gazebo_plugins -...
CMake Error at CMakeLists.txt:40 (find_package):
By not providing "Findgazebo.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by "gazebo", but CMake did not find one.
Could not find a package configuration file provided by "gazebo" with any of the following names:
Add the installation prefix of "gazebo" to CMAKE_PREFIX_PATH or set "gazebo_DIR" to a directory containing one of the above files. If "gazebo" provides a separate development package or SDK, be sure it has been installed.
-- Configuring incomplete, errors occurred!
<== Failed to process package 'gazebo_plugins':

Command '/home/lwalter/other/ros_catkin_ws/install_isolated/env.sh cmake /home/lwalter/other/ros_catkin_ws/src/gazebo_plugins -DCATKIN_DEVEL_PREFIX=/home/lwalter/other/ros_catkin_ws/devel_isolated/gazebo_plugins -DCMAKE_INSTALL_PREFIX=/home/lwalter/other/ros_catkin_ws/install_isolated' returned non-zero exit status 1
Reproduce this error by running:
==> cd /home/lwalter/other/ros_catkin_ws/build_isolated/gazebo_plugins && /home/lwalter/other/ros_catkin_ws/install_isolated/env.sh cmake /home/lwalter/other/ros_catkin_ws/src/gazebo_plugins -DCATKIN_DEVEL_PREFIX=/home/lwalter/other/ros_catkin_ws/devel_isolated/gazebo_plugins -DCMAKE_INSTALL_PREFIX=/home/lwalter/other/ros_catkin_ws/install_isolated

Command failed, exiting.

It can't find gazebo, so run cmake-gui . in ros_catkin_ws/build_isolated/gazebo_plugins and set gazebo_DIR to



Now it looks like the debian-supplied sdformat is conflicting with the one gazebo built; uninstall it and rebuild the ros_catkin_ws:

cd /home/lwalter/other/ros_catkin_ws/build_isolated/gazebo_plugins
cmake-gui .

SDFormat_DIR needs to be set to

Have to set the above for several packages.

RVIZ build problems with libshiboken

Linking CXX shared library /home/lwalter/other/ros_catkin_ws/devel_isolated/rviz/lib/libdefault_plugin.so
[ 95%] Built target default_plugin
make: *** [all] Error 2
<== Failed to process package 'rviz':
Command '/home/lwalter/other/ros_catkin_ws/install_isolated/env.sh make -j4 -l4' returned non-zero exit status 2
Reproduce this error by running:
==> cd /home/lwalter/other/ros_catkin_ws/build_isolated/rviz && /home/lwalter/other/ros_catkin_ws/install_isolated/env.sh make -j4 -l4

Investigate this with make VERBOSE=1

type 'QX11EmbedWidget' is specified in typesystem, but not defined. This could potentially lead to compilation errors.
Segmentation fault (core dumped)
make[2]: *** [src/python_bindings/shiboken/librviz_shiboken/librviz_shiboken_module_wrapper.cpp] Error 139
make[2]: Leaving directory `/home/lwalter/other/ros_catkin_ws/build_isolated/rviz'
make[1]: *** [src/python_bindings/shiboken/CMakeFiles/rviz_shiboken.dir/all] Error 2
make[1]: Leaving directory `/home/lwalter/other/ros_catkin_ws/build_isolated/rviz'
make: *** [all] Error 2

There is some discussion of probably the same issue at

The solution seems to be to remove shiboken:

sudo apt-get remove libshiboken-dev

CMake generates this new warning output:

Add the installation prefix of "GeneratorRunner" to CMAKE_PREFIX_PATH or
set "GeneratorRunner_DIR" to a directory containing one of the above files.
If "GeneratorRunner" provides a separate development package or SDK, be
sure it has been installed.
Call Stack (most recent call first):
src/python_bindings/shiboken/CMakeLists.txt:9 (include)
CMake Warning at /home/lwalter/other/ros_catkin_ws/install_isolated/share/python_qt_binding/cmake/shiboken_helper.cmake:41 (message):
Shiboken binding generator NOT available.
Call Stack (most recent call first):
src/python_bindings/shiboken/CMakeLists.txt:9 (include)
SIP binding generator available.
Python binding generators: sip
Configuring done

But the packages all build and install now.


Next, try building the catkin workspace with the projects I'm working on.  The first thing missing appears to be the joy package, so clone it and rerun the catkin make install in the main ros catkin ws:

git clone https://github.com/ros-drivers/joystick_drivers.git
sudo apt-get install libusb-dev libspnav-dev

What I don't understand about re-running ./src/catkin/bin/catkin_make_isolated --install is how much work has to be re-done even when nothing or very little has changed.  Object files are correctly recognized as already compiled, but something high level gets dirtied, and many shared libraries and scripts have to be rebuilt, presumably generating the exact same output files that were already generated.

October 24 2013

Software Archaeology #1: GPS tagged street video

Around 10 years ago I was working on a number of personal software projects with a mostly common C++ code-base that had a lot of boilerplate OpenGL and vector classes I'd built up from reading the NeHe tutorials.  Some of that work was properly documented, put into source control, and made public; the rest was periodically made into version-numbered tarballs.  When I finished or lost interest in developing some graphics technique or physics simulation or anything else, I would rename the directory to reflect the new project and start on new functionality: some of the old code was still useful, some of it had to get ifdeffed out, and some just sat unused.  Some of those projects were documented but not open-sourced, and a few of those tarballs were archived in my online home directory.  Eventually a lot of the code was superseded by vastly superior open source libraries, so it didn't make sense to continue using it, but I would sometimes make backups of the old stuff on DVD and copy them to multiple hard drives as I bought them, with less and less care as time went by.

Fast forward to the present: reading a section of Planet Google about StreetView got me thinking about a particular project where I drove around Seattle with a DV camera mounted on the passenger side and a GPS on my roof being logged to a laptop.  I'm pretty sure I was inspired by reading about the Aspen Movie Map in the Howard Rheingold book Virtual Reality.

Some OpenGL software loaded the images extracted from the video and then displayed them on top of a 3D GPS trajectory.  It worked fine, but I only ran it once, took no screenshots or videos, and told no more than one or two people about it.  Maybe I thought it was such a good idea it had to be kept secret until the opportunity to capitalize arose; obviously that opportunity is now long past.  But it still was fun to have done, and having it run again would be cool... except I couldn't find it on any of my still-running desktop computers or laptops.  Eventually I found a 250GB Maxtor drive in a shoebox and plugged it in with a usb-to-sata adapter, and there it was: 700 megabytes of video and images all nicely organized along with scripts and source code.  And it compiled: after resolving the SDL dependencies the only thing I had to do was move the -lGL etc. linker options to after the listing of object files:  $(CXX) -o $(PROGRAM) $(OBJECTS) $(LIBS)  instead of  $(CXX) -o $(PROGRAM) $(LIBS) $(OBJECTS).  And it ran fine with ./gpsimage --gps ../capture_10_22_2004.txt --bmp biglist.txt, and with some minor modification to the keyboard controls and the resolution I was able to take screenshots and a video:
Ballard surface streets
Exiting the tunnel to get on the viaduct
Driving south on the 99 viaduct looking west


It might be nice to actually check some of the code into github or something, but for now I'll document the important parts here.

I used dvgrab to extract video from the camera, and converted that to decimated timestamped bmp images.  The text gps log which looks like this:


was converted like this:

  ifstream parts(fileName.c_str());
  if (!parts) {
    OUT("File \"" << fileName << "\" not found.");

  vector3f initialPos;
  string lines;
  while (getline(parts,lines)) {
    //cout << lines << "\n";
    vector<string> tokens = tokenize(lines,",");

    if ((tokens.size() > 0) && (tokens[0] == "$GPGGA") && tokens.size() > 9) {

      float rawTime = atof(tokens[1].c_str());

      int tsec = (int)rawTime%100;
      int tmin = ((int)rawTime/100)%100;
      /// convert to local time
      int thr = (int)rawTime/10000 -7;
      float time =  (float)thr + ((float)tmin+tsec/60.0f)/60.0f;

      vector3f pos = vector3f(10000.0f*atof(tokens[2].c_str())-initialPos[0],
          -10000.0f*atof(tokens[4].c_str())- initialPos[2]

      if (initialPos == vector3f()) {
        initialPos = pos;
        pos = vector3f(0,0,0);

      pair<float,vector3f> tp(time,pos);



(tokenize was a function to split up lines of text; I think the standard C++ library didn't do that at the time)
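The timestamp arithmetic in that loop is easier to follow in isolation; here it is transcribed into Python (keeping the hard-coded UTC-7 offset from the original):

```python
def gpgga_to_local_hours(raw_time, utc_offset=-7):
    # $GPGGA times are HHMMSS.sss; mirror the C++ integer arithmetic
    t = int(raw_time)
    tsec = t % 100
    tmin = (t // 100) % 100
    thr = t // 10000 + utc_offset
    # decimal hours, local time
    return thr + (tmin + tsec / 60.0) / 60.0
```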

The timestamped bmp files look like this in a directory:


And read in like this:

  ifstream bmpList(bmpListFileName.c_str());
  if (!bmpList) {
    OUT("File \"" << fileName << "\" not found.");

  while (getline(bmpList,lines)) {

    vector<string> tokens = tokenize(lines,".");

    if (tokens.size() > 3) {
      string messyTime = tokens[tokens.size()-2];
      vector<string> items = tokenize(tokenize(messyTime,"-"),"_");

      if (items.size() == 4) {
        //OUT( items[1] << ":" << items[2] << ":" << items[3]);
        float time = atof(items[1].c_str());

        /// arbitrary offset to match gps to images better
        time += .012f;
      } else {
        OUT("list time wrongly formatted " << messyTime);

    } else {
      OUT("list items have wrong format" << lines);

Then brute-force the O(n^2) correspondence between image timestamps and gps timestamps:

 /// using the times extracted from the bmp file names, find what the closest
  /// gps coordinates for those times
  for (unsigned i = 0; i < timeImage.size(); i++) {
    for (unsigned j = 0; j < timePos.size()-1; j++) {
      if ((timePos[j].first <= timeImage[i].first)
        && (timePos[j+1].first > timeImage[i].first)) {
        struct tpi newTpi;
        newTpi.time = timeImage[i].first;
        /// interpolate - is this working?
        float factor = (newTpi.time - timePos[j].first)
          / (timePos[j+1].first - timePos[j].first);
        //OUT(i << " " <<j << " " <<factor);  
        newTpi.pos = timePos[j].second
          + (timePos[j+1].second - timePos[j].second) * factor;

        createTexture(newTpi.texture, timeImage[i].second);

        /// don't interpolate just use the same point
        //newTpi.pos = timePos[j].second;

        /// attitude
        vector3f up = vector3f(0,1.0f,0);
        /// this is arbitrary based on the fact the video was shot at a right angle to 
        /// the direction of travel
        vector3f right = (timePos[j+1].second - timePos[j].second);
        right = right/right.Length();

       // make all axes orthogonal
        vector3f out = Cross(up,right);
        up = Cross(right,out);

        // normalize
        out   = out/out.Length();
        up    = up/up.Length();

        /// scale
        if (i >0) {
          newTpi.scale = (newTpi.pos - tpiList[i-1].pos).Length()/2.0f;
        } else {
          newTpi.scale = 5.0f;


And then draw it later:

void gps::draw()
  /// the gps signal
  glColor3f(0.67398f,.459f, 0.459f);
  for (unsigned i = 0; i <timePos.size(); i++) {
    /// subtract first position to make path always start from origin
  glColor3f(0.67398f,.159f, 0.059f);
  for (unsigned i = 0; i <timePos.size(); i++) {
    /// subtract first position to make path always start from origin

  /// interpolated image position
  glColor3f(0.37398f,.659f, 0.459f);
  for (unsigned i = 0; i <tpiList.size(); i++) {
/*  glColor3f(0.17398f,0.559f, 0.859f);
  for (unsigned i = 0; i <tpiList.size(); i++) {



  /// always pointed at camera 
  //matrix16f temp = Registry::instance()->theCamera->location;

  vector3f loc = Registry::instance()->theCamera->location.GetTranslation();

  int oldI = 0;
  for (unsigned i = 0; i <tpiList.size(); i++) {
    float scale = tpiList[i].scale;

    /// simple distance culling
    float dist = (loc - tpiList[i].pos).Length();
    /*if ((dist >= 5000)) {
      /// make far away textures bigger, and show less of them
      float f= dist/5000;
      f =f*f;
      i += (int)f+1;
      scale*= f;
    if ((dist > 3000) && (dist <= 8000)) {
      if (i%5==0) {
        scale *=5;
      } else {
        dist = 20000;
    if (dist > 8000) {
      if (i%10==0) {
        scale *=10;
      } else {
        dist = 20000;
    if (dist < 16000) {
      glBindTexture(GL_TEXTURE_2D, tpiList[i].texture);

      matrix16f temp = tpiList[i].attitude;
      glTexCoord2f(0.0f, 0.0f);
      glTexCoord2f(1.0f, 0.0f);
      glTexCoord2f(1.0f, 1.0f);
      glTexCoord2f(0.0f, 1.0f);

    oldI = i;




A few other old projects could be revived, though some have more obscure dependencies (paragui and maybe another opengl gui).  It's not a high priority, but it would be nice to create better records now rather than wait even longer for more bitrot to set in, and I have a renewed interest in low-ish level OpenGL, so it would be nice to get refreshed on the stuff I've already done.

July 25 2013

Turn a set of mp3s into static image music videos

I wanted to take a directory full of mp3s, in this case a bunch of Creative Commons Attribution tracks from Kevin MacLeod (http://incompetech.com/music/), and make videos that simply have the artist name and track name, and moreover string many of those videos together into a longer compilation.  The Linux bash script to do this follows.

It seems like ffmpeg fails to concatenate once the video reaches an hour in length; I would get a segfault at that point.  The music and video were also getting unsynchronized, which causes the titles to run longer than the music does; I'll have to look more into that.

Make title image videos from a directory of mp3s:

mkdir output
rm output/*

for i in *mp3;
do
convert -background black -fill white \
-size 1920x1080 -pointsize 80 -gravity center \
label:"Kevin Macleod `echo $i | sed s/.mp3//`" output/"$i.png"

# TBD replace with ffmpeg
avconv -loop 1 -r 1 -i output/"$i.png" -c:v libx264 -i "$i" -c:a aac -strict experimental -shortest output/"$i.mp4"
done


Then concatenate into one long video (thanks to https://trac.ffmpeg.org/wiki/How%20to%20concatenate%20(join,%20merge)%20media%20files)

rm all_videos.txt
for i in *mp4;
do
echo $i
echo "file '$i'" >> all_videos.txt
done

mkdir output
ffmpeg -f concat -i all_videos.txt -c copy output/kevin_macleod_1.mp4

May 24 2013



Draw sound waveforms with a mouse, then play the sounds with keys that vary in pitch.  The frequency and phase spectrum can also be manipulated in the same way.

Mostly I want to create crude chiptunes sound effects, which it can do pretty well; I think it needs more layering/modulation capability to be a bit more useful.  Also, most of the interesting frequencies are very near the left-hand fifth of the frequency plot; an ability to zoom in there and on the time waveform would be very useful.  Maybe doubling or tripling the amount of horizontal resolution devoted to the plots would be nice as well.

The mouse drawing code is pretty crude, it can't even interpolate between two different sampled mouse y positions yet.

I used Processing and the minim sound library, which didn't directly support manipulation or viewing of phase information.  The trick was to subclass fft like this:



November 28 2011

Lunar DTM100 to Blender displacement map

This post is an aggregation of multiple threads created on google plus plus additional findings; the scattered nature of those threads made it impossible to find all the information in one place, hence this post.  It's also a work in progress with some blank spots - help is welcome!

(I've mostly moved to google plus which is great for mini-blogging (as opposed to the micro blogging of twitter and full size regular blogger blogging) and has very good engagement once you find good people to put in your circles.  I expect greater plus/blogger integration in the future probably starting with comments becoming plusified.)

From LROC Lunar Map

Lunar Elevation Data

Source DTM files are in the IMG files:


The highest resolution maps are in the 100M.IMG files, which means 100 meters/pixel (which seems large for an object as close as the moon - why don't we have 1 meter per pixel, or 0.1 meter, yet?).

I haven't written a script for handling the files that don't completely cover the lunar globe, and will probably use other tools to get that right.  (grass gis http://grass.fbk.eu/gdp/index.php?)

TBD detail on file format.

Generating 32-bit elevation tifs with Python

Conversion to viewable

Not a lot of programs can display those 32-bit tifs correctly, and Blender didn't like them.  Get a version of imagemagick with hdri enabled (./configure --enable-hdri when building it from source) so they can be converted to friendlier formats.

Making an easier to view jpg from the 32-bit tifs:

convert -define quantum:scale=255.0 -normalize moon.tif moon_fromtif.jpg

And the following is the result:
From LROC Lunar Map

The jpeg is usable in Blender but doesn't hold up well with a lot of zooming- the 255 levels of elevation possible in a jpeg produce stair step artifacts:
From LROC Lunar Map
Conversion to blender usable

So that Blender can use a 16-bit format like openexr for 255x smoother gradations, use this conversion:

convert -define quantum:scale=255.0 moon.tif moon.exr

The quantum scale there seems like it ought to be 65535.0, but 255.0 works, and imagemagick identify -verbose shows that the range of values is 65535.0.

Setting up image based displacement + bump in Blender 2.6x Cycles (latest svn)

TBD flesh this out in greater detail

Add | Mesh | UV sphere

Object Modifiers | Add Modifier | Subdivision Surface | Render 6

Material | Surface | Use Nodes

Shift-A | Texture | Image Texture | Open moon.exr

Connect the color to the diffuse bsdf; the texture should then be visible on the sphere in texture or render view mode (maybe after doing something to force a redraw/update).

Edit Mode | Select all edges

Mesh | UV Unwrap | Sphere projection | Align to object

Connect image texture to color to bw converter and then to displacement input on material output.  TBD proper height scaling of craters.

Object Data | Displacement | Method | Both

Link to .blend file:


From LROC Lunar Map
UV Sphere Projection Polar Problems

The UV spheres generated by Blender have the problem of having triangles instead of quads around the poles.  The spherical projection will produce distortion at the poles; smoothing the sphere prior to uv unwrap helps minimize it.  I wonder if there is something problematic about making the pole polygons quads, because it would involve multiple polygon points and edges right on top of each other.

You can see the problem areas in the uv image below- the nice quad projections become distorted triangles at the top and bottom.

The result is these pinched areas:

Lunar Visual Mosaics

The moon isn't perfectly grey, there are many interesting light and dark features.  I haven't located a good texture generated from LROC or Clementine data (LROC would be ideal since it would probably guarantee all visual features line up with elevation features).



There are some random ones to be found on the web but I haven't tried them yet.


Grass gis http://grass.fbk.eu/gdp

gdal - has python bindings (ubuntu install python-gdal gdal-bin)  http://www.gdal.org/
Turn python generated tiff images into geotiffs

osgearth - uses geotiff output from gdal to produce lod/paged terrain databases viewable in OpenSceneGraph osgviewer.

Use 100M data (the 256P IMG files).  Parse dtm within python to do this?  Minimum is extracting width x height.


Original discussions that originated this post:

June 29 2011

Buzz by Lucas Walter from Buzz

I kind of like the idea of a pay-per-drive system to more strongly couple driving with its true cost but this is an interesting alternative:


June 22 2011

Buzz by Lucas Walter from Buzz

I've tried the free gui based image stitching programs but I'd rather be using opencv:


June 16 2011

Buzz by Lucas Walter from Buzz

A revival of Amiga-style computer in a keyboard form factor?


I think there ought to be a smaller cd-drive free version.

May 31 2011

Buzz by Lucas Walter from Buzz

Looks as good as the new Sony AR and source is provided:

Augmenting hundreds of photographs with Polyora

May 27 2011

Buzz by Lucas Walter from Buzz

I'm the only person within two and a half miles to have signed up for this, but it is very new:

I think it's supposed to be sort of like freecycle but with a modern interface, with the addition of skills as well as material items to be given away/traded/borrowed/sold. They verify addresses via postcard with a code written on it. I like the concept but it seems like a lot of recent startups that are useless until large geographic densities of people sign up.
