
Geek Culture / Need help with Real Time 3D modeling issue in Unity

Angelic Reola
Joined: 3rd Nov 2016
Posted: 3rd Nov 2016 10:19
I am tracking 3D objects with a GUI and a Kinect utility. The objects sit on a table, with the Kinect and a projector mounted at the ceiling above it. From the Kinect I get each object's position in 3D space and can recover every type of rotation, always relative to the Kinect. The next task is projecting textures onto the objects. First I recreate the whole scene in Unity, placing a mesh at the tracked coordinates so it follows the movement. A Unity camera stands in for the projector: whatever the camera renders is what the projector displays. For this to work I need the projector's pose (extrinsics) with respect to the Kinect, plus its intrinsics (in my code: function_x_coorodinat and function_y_cordinate for the focal lengths, relative_x_position and relative_y_position for the principal point, and the params that model distortion), and finally I have to apply all of those to the Unity camera.
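For that last step, this is roughly the math a Unity camera needs: converting pinhole intrinsics into an OpenGL-style projection matrix (Unity's Camera.projectionMatrix uses the same convention). A minimal pure-Python sketch with hypothetical fx/fy/cx/cy values, not the poster's actual code; sign conventions for the y-axis differ between OpenCV (y down) and GL (y up), so you may need to flip the second row:

```python
def projection_from_intrinsics(fx, fy, cx, cy, width, height, near, far):
    """Build a 4x4 OpenGL-style projection matrix from pinhole
    intrinsics (fx, fy = focal lengths in pixels; cx, cy = principal
    point). The camera looks down -Z, as in OpenGL and Unity."""
    return [
        [2.0 * fx / width, 0.0, 1.0 - 2.0 * cx / width, 0.0],
        [0.0, 2.0 * fy / height, 2.0 * cy / height - 1.0, 0.0],
        [0.0, 0.0, -(far + near) / (far - near),
         -2.0 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ]

# Hypothetical projector: 1280x720, ~1000 px focal length,
# principal point at the image center.
P = projection_from_intrinsics(1000.0, 1000.0, 640.0, 360.0,
                               1280, 720, 0.1, 100.0)
```

In Unity (C#) you would fill a Matrix4x4 with these values and assign it to Camera.projectionMatrix; the extrinsics from calibration become the camera's Transform relative to the Kinect.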

Now this is the phase I am stuck on. I am calibrating the camera/projector pair using OpenCV, but getting wrong results. Right now I use a 6x5 checkerboard pattern: I store the inner-corner positions in a vector and display the pattern through the projector. The Kinect detects the projected pattern and extracts the corners with findChessboardCorners, storing them in a vector of Kinect 2D points. I then combine the color and depth streams to get the real-world 3D positions of all the inner corners via kinect3D = kinect->getCoordinates(kinect2D). I also tried holding a piece of cardboard over the table, so that the 3D points don't all lie in the same plane (the projected checkerboard always lands on whatever surface is on top).
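For reference, the model calibrateCamera fits to those point pairs is the pinhole projection: each Kinect 3D corner X should map to its projector 2D pixel through the projector's intrinsics K and pose [R|t]. A minimal pure-Python sketch (hypothetical values, distortion ignored) that can be used to sanity-check individual 3D/2D pairs:

```python
def project_point(K, R, t, X):
    """Pinhole projection x = K [R|t] X.
    K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]];
    R is a 3x3 rotation, t a 3-vector, X a 3D point."""
    # Transform the point into the camera (projector) frame.
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # Perspective divide, then apply focal length and principal point.
    u = K[0][0] * Xc[0] / Xc[2] + K[0][2]
    v = K[1][1] * Xc[1] / Xc[2] + K[1][2]
    return u, v

# Hypothetical intrinsics and an identity pose for illustration.
K = [[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 0.0]
u, v = project_point(K, R, t, [0.1, 0.05, 1.0])
```

If a detected corner and its Kinect 3D point disagree badly under any plausible K and pose, the 2D/3D pairing (ordering of the corner vectors) is the first thing to check.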

This gives me a mapping from 2D projector pixels to 3D real-world points, and from it I compute the projector's intrinsics and extrinsics using OpenCV's calibrateCamera. The Kinect is about 1 m from the table and the projector about 1 m from the Kinect, yet I am getting a huge reprojection error (the value exceeds 100). What is the solution? I have already tried transforming the Kinect coordinates into the z = 0 plane, so that I don't have to worry about the intrinsic factors. That reduces the error, but the result still isn't correct, because relative_y_position comes out nowhere near resoultionHeight_factor/2 (the principal point should be close to half the image height).
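For what it's worth, the number calibrateCamera returns is the RMS reprojection error in pixels: a usable calibration is typically a few pixels or less, while >100 px usually means the input is degenerate (all 3D points coplanar) or the 2D/3D pairs are mismatched. A minimal sketch of that error metric, so intermediate results can be checked by hand:

```python
import math

def rms_reprojection_error(projected, observed):
    """RMS pixel distance between reprojected and detected corners --
    the same quantity calibrateCamera reports. projected and observed
    are equal-length lists of (u, v) pixel tuples."""
    sq = [(pu - ou) ** 2 + (pv - ov) ** 2
          for (pu, pv), (ou, ov) in zip(projected, observed)]
    return math.sqrt(sum(sq) / len(sq))
```

A quick sanity check after calibration: reproject the stored 3D corners with the returned intrinsics/extrinsics, compute this error per view, and also verify that cx, cy land near width/2, height/2; if a single view dominates the error, that view's corner ordering is likely flipped.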


Here is the code I am trying to implement:

+ Code Snippet

For context: this is a job designing real-time tracking and display of relative positions in Unity. The application will be used by the Coast Guard to track illegal movement along the US border in real time. Right now I am stuck on this basic step, making small snippet changes to try to get correct output. The area I have to cover is a secure Coast Guard base with many kinds of movement taking place: http://militarybases.co/directory/air-station-cape-cod-coast-guard-base-in-cape-cod-ma/ . After successful implementation, the model will be integrated with a real-time drone (which will use radar to pinpoint movement and pass the coordinates to the model), and the result will be displayed on a patrol officer's computer.

Thanks for your time,

