Enabling auto calibration in Virtualizer causes a hang in Unity

I tried to enable auto calibration under the ak_autocalibration object, but the system hangs after I click the Play button. I debugged into the code and found the hang occurs because the depth of sample_point is always 0, so no sample points are ever added to the samplePoints variable. The depth is 0 because the data in the “camInfoList[cc].depthBytes” buffer is always 0, and I found this buffer is registered using the “registerBuffer” function.
So my question is: are there any settings I have to configure to make the Kinect SDK write data into these registered buffers?
Below is the code snippet for your information.

I forgot to mention that the Kinect works fine if I don’t enable auto calibration.

Here is the log from the AKPlugin:
AKPlugin Thread 0: starting grab thread: 0
AKPlugin Thread 0: camera config color format: 3
AKPlugin Thread 0: camera config color resolution: 2
AKPlugin Thread 0: camera config depth mode: 2

Great to hear that it’s at least working without the auto calibration!

To my knowledge, we haven’t been using the auto calibration feature in our recent demos; we’ve just been calibrating manually. But (I think) at one point in time that feature was working: it took a while to run, but it eventually produced a result.

Not sure what the current status of that feature is – perhaps @valentin or @jhobin know more about it.

I haven’t personally encountered this error when autocalibrating but it’s been a while since I’ve tested the feature. Our usual process is to autocalibrate then refine the resulting calibration manually. Thank you for the in-depth breakdown of what’s going wrong!

@jhobin, @ben thank you for answering my questions. Still on calibration: do I need special images, like a chessboard, to do the calibration? And how can I align the live captured scene with the pre-reconstructed 3D scene? I didn’t see any lines of code that perform this step.

I just set up a new Virtualizer instance yesterday and was able to autocalibrate it as expected. Autocalibration doesn’t require a special image and instead tries to find a shared plane between all of the cameras. The final alignment with the matterport scan has to be done manually.
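For anyone curious how a shared-plane approach can work in principle, below is a minimal, hypothetical sketch of RANSAC-style plane fitting on one camera’s depth points. This is not the Virtualizer’s actual implementation; the point extraction and all names here are assumptions for illustration only.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch: fit the dominant plane in a camera's depth point
// cloud with RANSAC. Auto calibration could then align the planes found
// independently by each camera.
public static class GroundPlaneSketch
{
    // Returns (normal, d) such that dot(normal, p) + d == 0 for plane points.
    public static (Vector3 normal, float d) FitPlane(
        List<Vector3> points, int iterations = 200, float inlierDist = 0.02f)
    {
        var rng = new System.Random(0);
        Vector3 bestNormal = Vector3.up;
        float bestD = 0f;
        int bestInliers = -1;

        for (int it = 0; it < iterations; it++)
        {
            // Pick three random points and form a candidate plane.
            Vector3 a = points[rng.Next(points.Count)];
            Vector3 b = points[rng.Next(points.Count)];
            Vector3 c = points[rng.Next(points.Count)];
            Vector3 n = Vector3.Cross(b - a, c - a);
            if (n.sqrMagnitude < 1e-10f) continue; // degenerate sample
            n.Normalize();
            float d = -Vector3.Dot(n, a);

            // Count points within inlierDist of the candidate plane.
            int inliers = 0;
            foreach (var p in points)
                if (Mathf.Abs(Vector3.Dot(n, p) + d) < inlierDist)
                    inliers++;

            if (inliers > bestInliers)
            {
                bestInliers = inliers;
                bestNormal = n;
                bestD = d;
            }
        }
        return (bestNormal, bestD);
    }
}
```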

@Hugo01 The auto calibration is more of an experiment. We would love others to contribute and make it better.

The ground plane detection works very well; the auto alignment, not so much. You need to rotate and move the cameras along the plane manually.

@valentin @jhobin @ben thank you for answering my questions. I finally got the auto calibration feature working by adding the code below before calibration begins:

// Returns true only once every camera's depth buffer contains real data.
// The Kinect SDK fills the registered buffers asynchronously, so right
// after startup they may still be all zeros, which made auto calibration hang.
bool checkCameraReadiness()
{
    for (int cc = 0; cc < camInfoList.Count; cc++)
    {
        byte[] depthBytes = camInfoList[cc].depthBytes;

        // Scan for any non-zero byte; an all-zero buffer means this
        // camera has not delivered a depth frame yet.
        bool hasNonZeroDepth = false;
        foreach (byte depth in depthBytes)
        {
            if (depth != 0)
            {
                hasNonZeroDepth = true;
                break;
            }
        }

        if (!hasNonZeroDepth)
        {
            Debug.Log("Depth buffer is not really ready!");
            return false;
        }
    }
    return true;
}
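In case it helps others hitting the same hang: one way to use this check is to poll it from a coroutine and only kick off calibration once every depth buffer has data. StartAutoCalibration() below is a hypothetical stand-in for whatever method actually begins the calibration in your project.

```csharp
using System.Collections;
using UnityEngine;

// Sketch: delay auto calibration until checkCameraReadiness() returns true.
IEnumerator WaitForDepthThenCalibrate()
{
    // Poll once per frame; the SDK fills the registered buffers asynchronously.
    while (!checkCameraReadiness())
        yield return null;

    Debug.Log("All depth buffers ready, starting auto calibration.");
    StartAutoCalibration(); // hypothetical: replace with the real entry point
}

// Usage, e.g. in a MonoBehaviour's Start():
// StartCoroutine(WaitForDepthThenCalibrate());
```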

But now I am in a situation where the live captures are not aligned with the existing reconstructed 3D scene. How can I manually align the live captures with the reconstructed scene in Unity? Can I just change the coordinate system of the captured 3D object model to achieve this, or do I need to change something on the virtualCam under the machineOrigin game object?


Excellent! When I next have time, I’ll integrate this code into a new open source release of the Virtualizer.

We first make sure the scan has the correct scale and good enough positioning, as you describe. Second, we move around the visualization objects of each Kinect (the game objects that draw the Kinect’s depth data in space) to refine the autocalibration. This manual movement of the Kinect representations can be saved using the same save button as the autocalibration.
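As a rough illustration of that manual refinement step, here is a sketch of nudging one Kinect’s visualization object along the ground plane from the keyboard. The class name, key bindings, and step sizes are all made up for illustration, not the Virtualizer’s actual controls.

```csharp
using UnityEngine;

// Hypothetical sketch: attach to a Kinect's visualization object and use
// the arrow keys to translate it along the plane, Q/E to rotate about the
// plane normal, to refine the autocalibrated pose by eye.
public class KinectNudger : MonoBehaviour
{
    public float step = 0.01f;      // metres per key press
    public float turnSpeed = 10f;   // degrees per second

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.LeftArrow))  transform.position += Vector3.left * step;
        if (Input.GetKeyDown(KeyCode.RightArrow)) transform.position += Vector3.right * step;
        if (Input.GetKeyDown(KeyCode.UpArrow))    transform.position += Vector3.forward * step;
        if (Input.GetKeyDown(KeyCode.DownArrow))  transform.position += Vector3.back * step;

        if (Input.GetKey(KeyCode.Q)) transform.Rotate(Vector3.up, -turnSpeed * Time.deltaTime);
        if (Input.GetKey(KeyCode.E)) transform.Rotate(Vector3.up,  turnSpeed * Time.deltaTime);
    }
}
```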

The virtualCam object under machineOrigin is positioned by the remote operator client and can safely be ignored for now.

@valentin can weigh in more on the specifics of manual alignment after calibration if I missed anything.