
When IMU is on, estimation is even worse than pure visual mode? #3

Open
highlightz opened this issue Jul 8, 2017 · 20 comments

@highlightz

highlightz commented Jul 8, 2017

Thanks for your sharing, first of all.
As you may know, the EuRoC V102 dataset contains continuous rotational motions in its middle part, which I think pose great challenges to a mono-based SLAM system. Meanwhile, when IMU information is fused, the estimation should become more robust. So I built your algorithm on my Ubuntu 14.04 machine (Intel® Core™ i5-3230M CPU @ 2.60GHz × 4, 8 GB memory). The result surprises me: the run with the IMU module off is better than with the IMU on. Although there are several failures in pure visual mode, the loop-closure function is available and can successfully relocalize. However, when the IMU is on, the state just flies out of the Vicon room, which I thought could not possibly happen. Below are the runs.
IMU on: (screenshot: "withimu")
IMU off: (screenshot: "noimu")
What are the possible reasons? I look forward to hearing from you.

@gaoxiang12
Owner

Thank you for the discussion. When you say "IMU is worse", I think it has two aspects: (1) VIO may fail, causing the pose estimate to fly away; (2) even when the IMU is working well, the accuracy is worse than pure vision. So I think you mean the first case, right?
For IMU optimization, I use Wang Jing's IMU preintegration and change ORB's local-mapping bundle adjustment to optimize the IMU-related error. However, when the vision is weak, meaning the camera cannot observe enough features, the IMU biases and the camera's speed may result in a bad estimate and the 6-DoF pose just flies away. The same phenomenon occurs in pure IMU estimation: when we cannot obtain the biases, the pose will fly away within a few seconds.
In ORB's case, we have a global map. Its scale is fixed after the monocular-IMU initialization, but this scale may not be very accurate (about 95%~105% of the ground truth in my experiments). This inaccurate scale will affect the accelerometer bias and the speed estimation.
Also, the IMU has some parameters to tune: the bias random-walk covariance, the gyro/acc measurement noise, and so on. I didn't spend much time optimizing those parameters. You can try increasing the covariance values, because I think the IMU is not as accurate as claimed.
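The connection between those tuning constants and the continuous noise densities in the EuRoC sensor.yaml can be sketched as follows (a sketch, not the repo's code; `dt` assumes the 200 Hz EuRoC IMU, and the `inflation` factor is just the "trust the IMU less" knob described above):

```cpp
#include <cassert>
#include <cmath>

// EuRoC V1_01_easy/imu0/sensor.yaml continuous noise densities.
const double dt       = 0.005;      // one IMU sample at 200 Hz
const double sigma_g  = 1.6968e-4;  // gyro noise density,  rad/s/sqrt(Hz)
const double sigma_a  = 2.0e-3;     // acc noise density,   m/s^2/sqrt(Hz)
const double sigma_gw = 1.9393e-5;  // gyro bias random walk
const double sigma_aw = 3.0e-3;     // acc bias random walk

// Discrete measurement covariance of a single sample: sigma^2 / dt.
const double gyr_meas_cov = sigma_g * sigma_g / dt;   // ~5.8e-6
const double acc_meas_cov = sigma_a * sigma_a / dt;   // ~8.0e-4

// Discrete bias random-walk covariance over one sample: sigma_bw^2 * dt.
const double gyr_bias_cov = sigma_gw * sigma_gw * dt; // ~1.9e-12
const double acc_bias_cov = sigma_aw * sigma_aw * dt; // ~4.5e-8

// Multiplying the measurement covariances by e.g. 100 (as the hand-tuned
// constants in imudata.h effectively do) tells the optimizer to trust
// the IMU less than its datasheet claims.
const double inflation = 100.0;
const double gyr_meas_cov_used = gyr_meas_cov * inflation;
```

Increasing `inflation` weakens the IMU residuals relative to the visual ones, which is the direction suggested above when the sensor is noisier than claimed.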

@highlightz
Author

Yeah, you are right. I mean the first case as you described: VIO fails, the system seems to propagate the IMU integration as the estimator's output, and the trajectory flies away quickly.
So I guess the file below describes the IMU parameter tuning.
```cpp
#include "imudata.h"

namespace ygz {
// covariance of measurement
// From continuous noise_density of dataset sigma_g/sigma_a rad/s/sqrt(Hz) -- m/s^2/sqrt(Hz)

/**
 * For EuRoc dataset, according to V1_01_easy/imu0/sensor.yaml
 * The params:
 * sigma_g: 1.6968e-4       rad / s / sqrt(Hz)
 * sigma_gw: 1.9393e-5      rad / s^2 / sqrt(Hz)
 * sigma_a: 2.0e-3          m / s^2 / sqrt(Hz)
 * sigma_aw: 3.0e-3         m / s^3 / sqrt(Hz)
 */

// *1e3/*1e2 chosen by experiments
double IMUData::_gyrBiasRw2 = 2.0e-5 * 2.0e-5 * 10;  // 2e-12 * 1e3
double IMUData::_accBiasRw2 = 5.0e-3 * 5.0e-3 * 10;  // 4.5e-8 * 1e2

Matrix3d IMUData::_gyrMeasCov =
        Matrix3d::Identity() * 1.7e-4 * 1.7e-4 / 0.005 * 100;       // sigma_g * sigma_g / dt, ~6e-6*10
Matrix3d IMUData::_accMeasCov =
        Matrix3d::Identity() * 2.0e-3 * 2.0e-3 / 0.005 * 100;       // sigma_a * sigma_a / dt, ~8e-4*10

// covariance of bias random walk
Matrix3d IMUData::_gyrBiasRWCov = Matrix3d::Identity() * _gyrBiasRw2;     // sigma_gw * sigma_gw * dt, ~2e-12
Matrix3d IMUData::_accBiasRWCov = Matrix3d::Identity() * _accBiasRw2;     // sigma_aw * sigma_aw * dt, ~4.5e-8

}  // namespace ygz
```
Am I correct?

On the other hand, I noticed that when VIO fails, the visual part of the system does not seem to try to relocalize itself in the global map, i.e. the visual part does not respond at all. This confuses me a lot. Is it possible to stop the IMU from outputting long-duration integration results (which are certainly not reliable)?

@gaoxiang12
Owner

Exactly. You can change the values of these parameters to see the effect, but I'm afraid this won't make a very large difference in performance.
For the second problem: since I still use the IMU-predicted pose when vision is lost, the estimate may fly away if we can't re-initialize vision within a few seconds. A better way to deal with this situation may be to keep the current pose estimate and wait for BoW relocalization.
I'm currently working on a stereo visual-inertial odometry based on this repo, and I hope to solve this problem there, because stereo-inertial is a bit easier. Improving ORB is a real headache: you need to change many related codes (the stereo and RGB-D cases) when working on monocular VIO to keep the overall system working.
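The "keep the current pose and wait for relocalization" idea could be sketched as a small guard around the tracker's per-frame output (hypothetical names, not the repo's API; `max_imu_only` is an assumed threshold):

```cpp
#include <cassert>

// Minimal pose placeholder; the real system carries full P/V/R/bg/ba state.
struct Pose { double x = 0, y = 0, z = 0; };

struct LostHandler {
    double max_imu_only = 0.5;  // seconds of pure IMU prediction we accept
    double lost_time    = 0.0;
    Pose   last_good;

    // Decide which pose to publish for the current frame.
    Pose onFrame(bool vision_ok, const Pose& imu_predicted, double frame_dt) {
        if (vision_ok) {
            lost_time = 0.0;
            last_good = imu_predicted;  // vision-corrected pose in reality
            return last_good;
        }
        lost_time += frame_dt;
        if (lost_time <= max_imu_only)
            return imu_predicted;       // brief IMU bridging is usually fine
        return last_good;               // freeze and wait for relocalization
    }
};
```

The key design choice is the cap: pure IMU integration is trustworthy for a short bridge but diverges quickly once velocity and bias estimates go stale, so the output is frozen rather than propagated.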

@highlightz
Author

highlightz commented Jul 19, 2017

I see in the source code that

```cpp
// ...
{
    if (mbUseIMU == false) {
        LOG(INFO) << "Try relocalization" << endl;
        bOK = Relocalization();
    } else {
        // TODO: add re-localization. Need to re-initialize P/V/R/bg/ba
        LOG(ERROR) << "TODO: add relocalization. Lost, reset." << endl;
        // mpSystem->Reset();
        return;
    }
}
// ...
```

So what do I need to do to make the VIO mode relocalize itself by vision after getting lost?
Specifically, I guess I must add the Relocalization() method in the else branch, but besides this, the comment says "Need to re-initialize P/V/R/bg/ba". Does this mean the same initialization process as a newly started system is performed again? What confuses me is that at the place where tracking is lost, the current camera pose should not be initialized from scratch. How should I handle this situation? Looking forward to your instructions and help.

@gaoxiang12
Owner

Please note that the initialization in VIO is a bit more complicated than in the pure-vision case. You need to estimate the gravity, velocity, scale, and the IMU biases during initialization, rather than just the initial pose and map as in SLAM. The initialization may also fail if you don't provide sufficient motion, because the scale and accelerometer biases are correlated and thus ambiguous. We have provided an initialization process in the code, and you may call it after tracking is lost.

A probably more convenient way of using VIO is to use only the gyroscope to estimate the rotation, and ignore the translation when vision is not working. Because the integrated angle is likely to be right, you can still have a rotation estimate in such cases, while the accelerometer may diverge quickly if you don't know the speed and its bias.
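As a toy illustration of this gyro-only fallback (a 1-axis sketch; the real code integrates on SO(3) via preintegration, and the bias value here is made up):

```cpp
#include <cassert>
#include <cmath>

// Integrate a bias-corrected z-axis angular rate over `steps` samples of
// length `dt`. The yaw error grows only linearly with the residual bias
// error, unlike double-integrated accelerometer translation, which
// diverges quadratically once the velocity/bias estimates are wrong.
double integrateYaw(double gyro_z, double bias_z, double dt, int steps) {
    double yaw = 0.0;
    for (int i = 0; i < steps; ++i)
        yaw += (gyro_z - bias_z) * dt;
    return yaw;
}
```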

@highlightz
Author

Since there is an initialization process in the code, why haven't you called it yet? As shown in the code in my last comment, you seem to omit this process and put a TODO entry there. May I know the reason? By the way, could you let me know where I can find the initialization process?

@gaoxiang12
Owner

The IMU init is written in an extra thread, and it usually needs 30 seconds to converge. I wonder whether this is a good way to init the IMU, and I'm trying other methods (in a new repo, because ORB is hard to debug). Please take a look at ygz-stereo-inertial, where I use a very simple strategy for initializing a stereo-IMU VO system.

@TouqeerAhmad

@highlightz @gaoxiang12 I am also trying to get around this TODO for relocalization when the IMU is used -- do you have any update on that? Any help in this regard would be great!

@highlightz
Author

@TouqeerAhmad See https://github.com/jingpang/LearnVIORB/ for what you want. In that project, all the needed IMU re-initialization functions are provided in Tracking.cc.

@TouqeerAhmad

Thank you @highlightz, I will look into it. Just a follow-up: were you able to run YGZ-SLAM on your own data, specifically data from an Android mobile device?

@highlightz
Author

@TouqeerAhmad Yeah, I fed my iPhone 6S image and IMU data to YGZ. Because the dataset is easy to handle (smooth motion and enough texture), it runs well. But I've never tested Android data.
(screenshot: iPhone 6S trajectory, "iphone6s")

@TouqeerAhmad

@highlightz Would you mind sharing the iPhone 6S data and the respective config file? I have S8/S7 data but nothing from an iOS device. How did you do your cam-IMU calibration? Did you use Kalibr for that?

Thanks again for your help!

@highlightz
Author

Yeah, please use Kalibr to acquire the extrinsic matrix as well as the IMU parameters.
Below is the iPhone 6S dataset, with the same layout as the EuRoC benchmark.
https://drive.google.com/open?id=0B7wDTHnOtxWUQjhvMWZiQ2c2d1U

@TouqeerAhmad

@highlightz Thanks for sharing the dataset. I just tried to run it, but for some reason it crashed right after updating the map scale and Map NavState. Did you make any changes to YGZ-SLAM to run with the iPhone 6S data? For me it runs fine on the EuRoC datasets but crashes on Android and now on iPhone data. Do we need to make any change in the code based on the device/sensors, other than just replacing the config file?

Really appreciate your continuous help!
(screenshot from 2017-10-11 21-21-44)

@TouqeerAhmad

@highlightz Also, is the trajectory you posted above for the same dataset you shared? I see you are moving in a loop around desks, but there is no loop in the trajectory. Please let me know if I missed something here.

@highlightz
Author

  1. I made no changes to the source code; I only replaced the necessary parameters in the config file.
  2. The trajectory I posted above is from the dataset I shared, and it is only part of my run.

As for your crash, you need to debug your program and trace back the reason it crashes.

@TouqeerAhmad

@highlightz Can you please confirm the orientation of the phone when you collected the data? Was the phone in landscape mode, and

  1. was the home button on the left, or
  2. was the home button on the right?

I collected some data with my friend's iPhone 6, and acc_x for me is close to negative 9.8, but in the dataset you shared it was always close to positive 9.8. Then I changed the app and it became positive 9.8. Can you please confirm the above?
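One way to check which sign convention a recording app uses is to look at the mean accelerometer reading on the gravity-aligned axis while the device is static (a sketch with a hypothetical helper; the tolerance is arbitrary):

```cpp
#include <cassert>
#include <cmath>

// Returns +1 if the static reading follows the "+9.8 on the up axis"
// convention, -1 for the negated convention, and 0 if the device was not
// static or the axis is not aligned with gravity.
int gravitySign(double mean_acc_axis, double g = 9.81, double tol = 1.0) {
    if (std::fabs(mean_acc_axis - g) < tol) return +1;
    if (std::fabs(mean_acc_axis + g) < tol) return -1;
    return 0;
}
```

Mixing the two conventions between calibration and runtime flips the estimated gravity direction, which is enough to make a VIO initialization diverge.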

@zxp771

zxp771 commented Mar 28, 2018

@highlightz
Hi,
could you please share your iPhone 6S dataset?
BTW, I'm trying to make my own dataset; could you share your experience of making yours?
Thanks for your help.

@a00achild1

Hi, I recently tried to use the IMU with the mono version and also got bad results.
After checking the scale output from TryInitVIO, I found that it is sometimes negative.
I wonder whether this is right. Should the scale solved by the algorithm ever be negative?
If yes, maybe we should reject this kind of result and retry the initialization.
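If the solver can indeed return a negative scale, a minimal guard before applying it might look like this (hypothetical helper, not the repo's code; the threshold is arbitrary):

```cpp
#include <cassert>

// A metric scale must be strictly positive; reject degenerate solutions
// and let the caller retry the VIO initialization instead of rescaling
// the map with a meaningless value.
bool scaleAcceptable(double scale, double min_scale = 1e-3) {
    return scale > min_scale;
}
```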

It would be very helpful if someone could comment on this.
Thanks a lot!

@450932894

Hi!
Did you find a ROS version of ygz-slam? Can you share the URL?
Thank you! @highlightz @TouqeerAhmad @a00achild1
