[Kwiver-users] Starting with MapTK
Matthew Leotta
matt.leotta at kitware.com
Mon Mar 14 14:30:16 EDT 2016
> On Mar 14, 2016, at 1:56 PM, OSMANOGLU, BATUHAN (GSFC-6180) <batuhan.osmanoglu at nasa.gov> wrote:
>
> Hello Matt,
>
> Thanks so much for the detailed answers. I think I will wait for the release of MapTK 0.8.0 and the tutorial. Do you mind sending an email to the group when they are released?
I certainly will make an announcement when these are available.
>
> The POS data sounds manageable. We had a bunch of different GPS/IMU sensors on board the flight, some of which had vendor specific formats. But for our in-house codes we use ascii as well, so matching the POS format is not a problem.
>
> The video I am planning to use was collected through a nadir-looking window of the plane, and I am planning to use the plane’s avionics data for the POS data. Thanks for mentioning the pos2krtd to check for errors. I am expecting some differences between the camera and GPS antenna position… Also I don’t know much about the camera that collected the imagery. I will look into obtaining that to find more about camera distortion etc.
You don’t necessarily need to know everything about the camera. In theory you can estimate everything from the data. In practice, however, you should use whatever information you have about the camera intrinsics, at least for initialization. This is especially true if you have a very narrow field of view (high-zoom) lens or if there is substantial radial distortion. You can tell MAP-Tk to estimate any subset of the camera intrinsic parameters, but convergence may slow, or even fail, if there are too many unknowns. There is also a known degeneracy when estimating radial distortion from a nadir-looking camera [1]. As a first pass I would set skew to zero, use the center of the image as the principal point, and use zero distortion (unless you see noticeable distortion in the video). You can have MAP-Tk estimate the focal length of the lens, but a reasonable guess for initialization can help considerably. Usually you can figure this out from the specs on the camera sensor and lens that were used.
[1] http://ccwu.me/file/radial.pdf
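[Editor's note: a minimal sketch of the focal-length guess described above, computed from sensor and lens specs. The lens, sensor, and image dimensions below are made-up example numbers, not values from this thread.]

```python
# Rough focal-length-in-pixels estimate for initializing camera
# intrinsics from lens/sensor spec sheets.
def focal_length_pixels(focal_mm, sensor_width_mm, image_width_px):
    """Convert a lens focal length (mm) to pixels using the sensor width."""
    return focal_mm / sensor_width_mm * image_width_px

# e.g. a 12.5 mm lens on a 7.6 mm-wide sensor imaging at 1920 px wide
f_px = focal_length_pixels(12.5, 7.6, 1920)
print(round(f_px, 1))  # prints 3157.9
```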
>
> The PLY format should be OK. I haven’t played with it much, but if Blender can open it, that is perfect.
>
> It seems like at this point, I don’t need to worry about trying to get access to forge.mil. If we find out that it would be beneficial in the future I will try to contact them again.
>
> All the best,
> Batu.
>
> From: Matthew Leotta <matt.leotta at kitware.com>
> Date: Monday, March 14, 2016 at 9:10 AM
> To: "Osmanoglu, Batuhan (GSFC-618.0)[USRA]" <batuhan.osmanoglu at nasa.gov>
> Cc: "kwiver-users at public.kitware.com" <kwiver-users at public.kitware.com>
> Subject: Re: [Kwiver-users] Starting with MapTK
>
> Batu,
>
> I’ll address your questions inline below.
>
>> On Mar 11, 2016, at 5:19 PM, OSMANOGLU, BATUHAN (GSFC-6180) <batuhan.osmanoglu at nasa.gov> wrote:
>>
>> Hi all,
>>
>> I have an airborne dataset collected over forested areas. Looking into Surface/Structure From Motion algorithms I came across MapTK and would like to give it a shot. We are interested in getting tree heights…
>>
>> I couldn’t get access to the forge.mil, so I am working with the public version at the moment. Not sure if there are differences.
>
> None of MAP-Tk is on forge.mil. However, some other parts of KWIVER have a component on forge.mil. We do have an internal development branch of MAP-Tk that is not public. This is where we do much of the development sponsored by the Air Force. Every three months we request approval for that code from the Air Force, and once we get approval we fold that code back into the public GitHub master branch. There is a large chunk of code pending approval right now. I expect approval to come through any day now. Once approved I will push the changes to GitHub, and shortly after that I’ll be releasing MAP-Tk v0.8.0. If you can wait a couple of weeks, you’ll find MAP-Tk v0.8.0 to be considerably easier to configure.
>
>>
>> I couldn’t find a tutorial on MapTK (I guess Matt is working on it), but reading some of the emails on this list, I think I will have to do:
>> Extract frames from the video (ffmpeg etc.)
>> Use the config files under /tools/config as template
> Correct. A tutorial will be coming out (pending Air Force approval) in April as a blog post and in the April edition of the Kitware Source (http://www.kitware.com/media/thesource.html). This tutorial will come with some sample data and configuration files. It will focus on MAP-Tk v0.8.0.
>
> For now you do need to extract frames as images using FFmpeg or another tool. That will probably not change until MAP-Tk v0.9.0 later this year. In MAP-Tk v0.7.x you should use the config files in tools/config as a template. These files can be a bit unwieldy due to the number of nested algorithms. MAP-Tk v0.8.0 will support modular config files which can be included from other config files. A default set of config files will be installed with the software. This will make the top-level configuration much simpler. You will be able to include the default config files to get the default algorithms and parameters.
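[Editor's note: a sketch of the frame-extraction step described above, written as Python that builds the FFmpeg command line. The output filename pattern and frame rate are hypothetical choices, not values from this thread.]

```python
# Build an FFmpeg argument list to extract video frames as numbered
# image files, as needed by MAP-Tk v0.7.x.
def ffmpeg_extract_cmd(video, out_pattern="frames/frame%05d.png", fps=None):
    """Return the FFmpeg command for extracting frames from `video`."""
    cmd = ["ffmpeg", "-i", video]
    if fps is not None:
        cmd += ["-vf", f"fps={fps}"]  # optionally subsample to `fps` frames/sec
    cmd.append(out_pattern)
    return cmd

cmd = ffmpeg_extract_cmd("flight.mp4", fps=2)
print(" ".join(cmd))
# prints: ffmpeg -i flight.mp4 -vf fps=2 frames/frame%05d.png
```

One would then pass a list of the extracted image paths to the tracking tool via its config file.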
>
>
>> This is where it gets fuzzy ;)
>> What is the order/sequence of the MapTK commands?
>> maptk_track_features first etc..
> The two primary commands are maptk_track_features and maptk_bundle_adjust_tracks, run in that order. The first takes your image sequence and produces a feature track file. The second takes the feature track file and estimates camera parameters and 3D landmarks.
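[Editor's note: the two-stage pipeline just described, sketched as a dry run that only assembles the command lines. The `-c` config flag and file names are assumptions for illustration, not confirmed in this thread.]

```python
# Assemble the MAP-Tk pipeline commands in the order they should run.
def maptk_pipeline(track_cfg="track_features.conf",
                   ba_cfg="bundle_adjust.conf"):
    """Return the two commands in execution order (not executed here)."""
    return [
        ["maptk_track_features", "-c", track_cfg],     # images -> feature tracks
        ["maptk_bundle_adjust_tracks", "-c", ba_cfg],  # tracks -> cameras + landmarks
    ]

for cmd in maptk_pipeline():
    print(" ".join(cmd))
```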
>
>>
>> How do I enter the camera position (or plane position info?)
>> What format does it need etc?
> MAP-Tk can reconstruct the camera positions and landmarks without any prior position info, but the solution is only determined up to an unknown similarity transform (global scale, orientation, and geo-location). If you have metadata about position (e.g. from GPS and IMU) you can use that. The metadata will provide initialization for the solution and also constrain the unknown similarity. Currently the only format we support for the metadata is a format used by the Air Force called POS. You can find an example of POS metadata with the CLIF 2007 data (https://www.sdms.afrl.af.mil/index.php?collection=clif2007). There is one POS file per image, and each contains comma-separated ASCII values for
>
> yaw, pitch, roll, latitude, longitude, altitude, gpsSeconds, gpsWeek, northVel, eastVel, upVel, imuStatus, localAdjustment, dstFlag
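[Editor's note: a minimal parser for the POS line described above, using the field names from this message. The sample values are made up for illustration.]

```python
# Parse one comma-separated ASCII POS metadata line into named fields.
POS_FIELDS = ["yaw", "pitch", "roll", "latitude", "longitude", "altitude",
              "gpsSeconds", "gpsWeek", "northVel", "eastVel", "upVel",
              "imuStatus", "localAdjustment", "dstFlag"]

def parse_pos_line(line):
    """Return a dict mapping each POS field name to its float value."""
    values = [float(v) for v in line.strip().split(",")]
    return dict(zip(POS_FIELDS, values))

sample = "1.5,-0.2,0.1,38.99,-76.85,1200.0,0,0,0,0,0,0,0,0"
pos = parse_pos_line(sample)
print(pos["latitude"], pos["altitude"])  # prints: 38.99 1200.0
```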
>
> We currently use only the first 6 fields of this file in MAP-Tk; the rest can be set to zeros. If you have another format for your metadata, I’d be interested to know about it. There is no reason we couldn’t support other formats in the future. The other format we support for cameras is KRTD, another ASCII format, which contains a 3x3 calibration matrix (K), a 3x3 rotation matrix (R), a 1x3 translation vector (t), and a 1xN distortion vector (d). For the distortion vector, N can be between 1 and 8, but you can use a single “0” to model no radial distortion. The KRTD file represents the cameras relative to some local origin. The KRTD files are not geo-located but can contain absolute scale and orientation. The POS files, on the other hand, have geo coordinates but do not contain any information on camera intrinsic parameters (focal length, distortion, etc.).
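[Editor's note: a sketch of reading a KRTD file per the description above: 3x3 K, 3x3 R, 1x3 t, then 1–8 distortion coefficients (or a single "0"). It assumes the file is plain whitespace-separated ASCII; the exact line layout here is illustrative.]

```python
# Split a KRTD file's contents into (K, R, t, d) by token position.
def parse_krtd(text):
    """Return the K, R, t, d blocks from a KRTD file's text."""
    vals = [float(v) for v in text.split()]
    K = [vals[0:3], vals[3:6], vals[6:9]]    # 3x3 calibration matrix
    R = [vals[9:12], vals[12:15], vals[15:18]]  # 3x3 rotation matrix
    t = vals[18:21]                          # 1x3 translation vector
    d = vals[21:]                            # 1..8 distortion coeffs; "0" = none
    return K, R, t, d

sample = """1000 0 960
0 1000 540
0 0 1
1 0 0
0 1 0
0 0 1
0 0 0
0"""
K, R, t, d = parse_krtd(sample)
print(K[0][2], d)  # prints: 960.0 [0.0]
```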
>
> There is a tool, pos2krtd, that converts POS files to KRTD given a model of the camera intrinsics as well. The orientation angles (yaw, pitch, roll) can be tricky to get correct if you are generating the POS file from another data source, so converting POS to KRTD is a good way to check that the POS files are as expected. The KRTD files can be loaded directly into the MAP-Tk GUI application for viewing.
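[Editor's note: a hedged sketch of the tricky orientation step a pos2krtd-style converter performs: turning yaw/pitch/roll into a rotation matrix. This uses a standard Z-Y-X (yaw-pitch-roll) Euler convention; MAP-Tk's actual axis and sign conventions may differ, which is exactly why checking with pos2krtd is recommended.]

```python
import math

# Build a 3x3 rotation matrix R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
# from angles given in degrees.
def ypr_to_matrix(yaw, pitch, roll):
    y, p, r = (math.radians(a) for a in (yaw, pitch, roll))
    cy, sy = math.cos(y), math.sin(y)
    cp, sp = math.cos(p), math.sin(p)
    cr, sr = math.cos(r), math.sin(r)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

# Sanity check: zero angles should give the identity rotation.
print(ypr_to_matrix(0, 0, 0))
```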
>
>>
>> Tools for displaying the point cloud?
>> Or do we get a file that has to be opened with python displayed in Mayavi etc…
>
> The point cloud comes out as a PLY file, which is fairly standard and can be viewed in numerous tools. The MAP-Tk GUI makes it easy to view both the cameras and the 3D point cloud together. In MAP-Tk v0.8.0 you can just “open” the same config files used on the command line to load the cameras and point cloud. MAP-Tk also comes with scripts to aid in importing the results into other third-party tools. MAP-Tk includes Python plugins for Blender (https://www.blender.org/) to load the KRTD files, and Blender natively supports PLY. MAP-Tk v0.8.0 will also provide Ruby plugins for SketchUp (http://www.sketchup.com/) to import both the KRTD camera files and the PLY point cloud.
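[Editor's note: a minimal reader for an ASCII PLY point cloud, as a quick sanity check before loading results into Blender or SketchUp. It handles only the simple ascii-format vertex case; real files may carry extra properties or binary encodings.]

```python
# Read (x, y, z) vertices from an ASCII PLY string.
def read_ply_vertices(text):
    """Return the vertex list declared in an ASCII PLY header."""
    lines = text.splitlines()
    n_verts = 0
    for i, line in enumerate(lines):
        if line.startswith("element vertex"):
            n_verts = int(line.split()[-1])  # vertex count from the header
        if line.strip() == "end_header":
            body = lines[i + 1:i + 1 + n_verts]
            return [tuple(float(v) for v in row.split()[:3]) for row in body]
    return []

sample = """ply
format ascii 1.0
element vertex 2
property float x
property float y
property float z
end_header
0.0 0.0 1.5
1.0 2.0 3.0"""
print(read_ply_vertices(sample))  # prints: [(0.0, 0.0, 1.5), (1.0, 2.0, 3.0)]
```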
>
>>
>> I appreciate any suggestions, assistance ;) Oh and currently I am on MacOS, though can switch to others if there is a preferred environment.
>
> We support MacOS, Linux, and Windows. I do most of my development on MacOS as well, so that should not be a problem.
>
> MAP-Tk is still very much a work in progress, so please let us know if you run into trouble. If you are willing to get your hands dirty with the code, pull requests are also welcome for bug fixes and new features.
>
> Good luck,
> Matt
>
>
>