Here's a quick outline of the process I went through to build this:
Shooting — Using the annotated satellite pictures and my camera (a Canon Powershot A530), I spent several weekends walking around campus, eventually taking close to 8,000 pictures. I had to shrink their resolution significantly (which is why you can't zoom, unfortunately) so that storing and backing up all those pics wouldn't be too much of a hassle. I picked weekends because campus was deserted then; otherwise I would've had to wait for people to walk out of frame all the time.
I didn't have time to cover the whole campus, unfortunately. I did this in my last weeks in Saarbrücken, and I ran out of time. That's why the whole area where the Computer Science buildings are is missing, for instance, and coverage around DFKI is sketchy.
One such app helped with ordering the pictures correctly and grouping together those taken at the same spot:
Another was for matching up fixed features that were visible both in the picture and on the satellite map (e.g. manholes, streetlamps). By gathering enough of these, and also pinpointing on the map the exact point where I was standing, it became possible to estimate the direction each picture was facing. This step represented the bulk of the manual geotagging work.
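The math here is simple enough to sketch. Each matched feature gives you one estimate of the camera's heading: the compass bearing from where you stood to the feature, minus the angular offset implied by the feature's pixel column. Averaging those estimates (with a circular mean, so 359° and 1° don't cancel out) gives the picture's orientation. This is a reconstruction of the idea, not my original code, and the field-of-view and image-width constants are made up:

```python
import math

FOV_DEG = 52.0      # assumed horizontal field of view of the camera
IMG_WIDTH = 1024    # assumed picture width in pixels

def bearing(lat1, lon1, lat2, lon2):
    """Compass bearing in degrees from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def estimate_heading(camera, matches):
    """camera: (lat, lon); matches: list of (pixel_x, lat, lon).

    Each match yields one heading estimate; combine them with a
    circular mean so estimates that straddle north average correctly.
    """
    sin_sum = cos_sum = 0.0
    for px, lat, lon in matches:
        offset = (px - IMG_WIDTH / 2) / IMG_WIDTH * FOV_DEG
        h = bearing(camera[0], camera[1], lat, lon) - offset
        sin_sum += math.sin(math.radians(h))
        cos_sum += math.cos(math.radians(h))
    return math.degrees(math.atan2(sin_sum, cos_sum)) % 360
```

With a handful of matches per picture, outliers (a mis-clicked manhole, say) get averaged away.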
Incidentally, this has left me with a data set of about 12,000 manually selected data points, each mapping a particular pixel in one of the photos to a particular latitude/longitude (as taken from Google's satellite images — the imagery could be off by a few meters, though that would be consistent across all data points). If anyone has any use for such data, drop me a line, I'm happy to share.
One problem I encountered was that Google's imagery was annoyingly up-to-date. The campus has changed in several places over the past few years, and I wanted the old imagery, the one that matched my 2008 pictures. I managed to get it by fiddling with the tile URLs, but eventually they took the old tiles offline completely. Fortunately, I happened to have a huge PNG file somewhere with a single satellite picture of the whole campus, built from Google's old imagery (it was a poster I'd made for my room back in SB). Through much fiddling I was able to figure out the poster's coordinates and the exact transforms I'd applied to it, and then I reconstructed the tiles from the big poster, allowing me to complete the project.
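Rebuilding the tiles boils down to the standard Web Mercator tile math: once the poster is georeferenced to a lat/lng bounding box, you can map any coordinate to a global pixel position at a given zoom level, and from there to a tile index plus an offset within that 256×256 tile. A sketch of that math (the function names are mine, not from the original code):

```python
import math

TILE = 256  # standard slippy-map tile size in pixels

def latlon_to_global_px(lat, lon, zoom):
    """Project a lat/lng to global Web Mercator pixel coordinates
    at the given zoom level (world is TILE * 2**zoom pixels wide)."""
    n = TILE * 2 ** zoom
    x = (lon + 180.0) / 360.0 * n
    s = math.sin(math.radians(lat))
    y = (0.5 - math.log((1 + s) / (1 - s)) / (4 * math.pi)) * n
    return x, y

def global_px_to_tile(x, y):
    """Split global pixel coordinates into a tile index and the
    pixel offset inside that tile."""
    return int(x // TILE), int(y // TILE), x % TILE, y % TILE
```

Cutting the actual tiles is then just cropping 256×256 squares out of the (appropriately scaled) big image at those offsets.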
This is also the reason why the little map widget on the app shows the path going through buildings: those buildings weren't there when I took my pictures.
Building the interface — I experimented with several ways for the user to navigate around. My first attempt involved just the plain pictures with no lines drawn on them. You could click anywhere, and it'd figure out which new position best matched your click. It turned out to be rather hard to use, though, because it was quite unpredictable, and it would let you walk through walls, which was confusing.
Next I tried manually defining a "clickmap" for each picture, dictating where you could click and where it would take you. A little arrow appeared on mouseover to explain the movement that would take place if you clicked (e.g. a left arrow showing that if you click here, you'll turn left). It didn't work great, and it involved yet more manual work. I don't have screenshots, unfortunately.
Eventually I settled on the "Yellow Brick Road". The yellow lines are drawn automatically based on what the system knows about each picture's position and orientation. Because I didn't enter any elevation information, the system assumes that the world is flat, which is why things can look a bit weird around steps and slopes. But overall I'm fairly happy with the result.
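Drawing the lines under the flat-world assumption comes down to projecting ground points through a pinhole camera: points on the path are rotated into the camera's frame, and their image position follows from the fields of view and an assumed eye height. A sketch of that projection (the constants are invented; the real code surely differs):

```python
import math

CAM_HEIGHT = 1.7          # assumed eye height in meters
H_FOV, V_FOV = 52.0, 40.0  # assumed fields of view in degrees
IMG_W, IMG_H = 1024, 768   # assumed picture size in pixels

def project_ground_point(cam_xy, heading_deg, point_xy):
    """Project a point on flat ground (x, y in meters, y = north) into
    the image; returns (px, py) or None if behind the camera."""
    dx = point_xy[0] - cam_xy[0]
    dy = point_xy[1] - cam_xy[1]
    # rotate into the camera frame: fwd along the heading, right at 90°
    h = math.radians(heading_deg)
    fwd = dx * math.sin(h) + dy * math.cos(h)
    right = dx * math.cos(h) - dy * math.sin(h)
    if fwd <= 0:
        return None
    # focal lengths in pixels, derived from the fields of view
    fx = (IMG_W / 2) / math.tan(math.radians(H_FOV / 2))
    fy = (IMG_H / 2) / math.tan(math.radians(V_FOV / 2))
    px = IMG_W / 2 + fx * right / fwd
    py = IMG_H / 2 + fy * CAM_HEIGHT / fwd  # ground sits below the horizon
    return px, py
```

The `CAM_HEIGHT / fwd` term is where the flat-world assumption bites: on a staircase the ground isn't at a constant height below the camera, so the projected line drifts away from where the path actually is.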
Since building this thing I found out that Google actually lets people build street view "worlds" using their own software:
Anyway, if you've read this far I'd be happy to hear any feedback you might have. You can contact me by mail: