I am tracking 3D objects on a table using a Kinect mounted above it, a projector at the ceiling, and a small GUI utility. This gives me each object's position in 3D space and lets me handle every kind of rotation, always relative to the Kinect. The next task is projecting textures onto those objects. My plan is to recreate the whole scene in Unity, place meshes at the tracked coordinates and follow their movement, render the scene from a virtual camera standing in for the projector, and send that camera's output to the physical projector. This can only work if I know the projector's pose relative to the Kinect (extrinsics) and its intrinsics (focal lengths fx/fy, principal point cx/cy, and the distortion coefficients), and finally apply those to the Unity camera.
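For that last step my rough plan is to turn fx, fy, cx, cy into an OpenGL-style projection matrix for the Unity camera. A minimal sketch of the conversion (the sign of the off-center terms depends on Unity's axis conventions and may need flipping; the near/far clip values are arbitrary choices, not anything dictated by the calibration):

```cpp
#include <array>
#include <cmath>

// Build an OpenGL-style projection matrix (stored row-major here) from
// pinhole intrinsics. This is a starting point only: axis/sign conventions
// differ between OpenCV (y down) and Unity/OpenGL (y up).
std::array<std::array<double, 4>, 4> projectionFromIntrinsics(
    double fx, double fy, double cx, double cy,
    double width, double height, double zNear, double zFar)
{
    std::array<std::array<double, 4>, 4> P{}; // zero-initialized
    P[0][0] = 2.0 * fx / width;
    P[0][2] = 1.0 - 2.0 * cx / width;   // horizontal principal-point offset
    P[1][1] = 2.0 * fy / height;
    P[1][2] = 2.0 * cy / height - 1.0;  // vertical principal-point offset
    P[2][2] = -(zFar + zNear) / (zFar - zNear);
    P[2][3] = -2.0 * zFar * zNear / (zFar - zNear);
    P[3][2] = -1.0;                      // perspective divide by -z
    return P;
}
```

In Unity this matrix would be assigned via Camera.projectionMatrix rather than through field-of-view settings.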
This calibration is the phase I am stuck at. I am calibrating the projector against the Kinect with OpenCV but getting wrong results. I project a 6x5 chessboard pattern through the projector and store the inner corner positions (in projector pixel coordinates) in a vector. The Kinect detects the projected pattern in its color image and finds the corners with findChessboardCorners, which I store in a second vector. I then combine color and depth to get the real-world 3D position of each inner corner via kinect3D = kinect->getCoordinates(kinect2D). I have also tried tilting a piece of cardboard over the table so that the 3D points do not all lie in the same plane across views (otherwise the projected chessboard always lands on the flat tabletop).
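As far as I understand, the 2D-to-3D lookup in getCoordinates is the standard pinhole back-projection through the registered depth map. A self-contained sketch of that step (the intrinsic values here are placeholders, not the device's calibrated ones):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Back-project a pixel (u, v) with measured depth (in meters) into camera
// space using the pinhole model. fx, fy, cx, cy are hypothetical Kinect
// intrinsics -- the real lookup uses the device's factory calibration.
Vec3 backProject(double u, double v, double depth,
                 double fx = 1050.0, double fy = 1050.0,
                 double cx = 960.0, double cy = 540.0)
{
    Vec3 p;
    p.x = (u - cx) * depth / fx; // horizontal offset scaled by depth
    p.y = (v - cy) * depth / fy; // vertical offset scaled by depth
    p.z = depth;
    return p;
}
```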
This gives me correspondences between projector 2D points and real-world 3D points, from which I try to recover the projector's intrinsics and extrinsics with OpenCV's calibrateCamera. The Kinect is about 1 m above the table and the projector about 1 m from the Kinect, yet I get a huge reprojection error (the value exceeds 100). What is the solution? I have already tried transforming each view's 3D points from Kinect coordinates into a board-local z = 0 plane, so calibrateCamera receives planar object points as in ordinary camera calibration. That reduces the error, but the result still does not look like a correct solution: the recovered principal point cy is nowhere near height/2.
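To check whether the huge RMS value from calibrateCamera is dominated by a few bad correspondences, I compute a per-view reprojection error by hand. A simplified sketch using plain structs instead of cv::Mat, with distortion ignored:

```cpp
#include <array>
#include <cmath>
#include <vector>

struct P3 { double x, y, z; };
struct P2 { double u, v; };

// Project 3D points through a pinhole model (rotation R given row-major,
// translation t) and return the RMS pixel distance to the observed 2D
// points. Lens distortion is omitted for simplicity.
double rmsReprojectionError(const std::vector<P3>& object,
                            const std::vector<P2>& observed,
                            const std::array<double, 9>& R,
                            const std::array<double, 3>& t,
                            double fx, double fy, double cx, double cy)
{
    double sum = 0.0;
    for (size_t i = 0; i < object.size(); ++i) {
        const P3& p = object[i];
        // rotate and translate into camera coordinates
        double xc = R[0]*p.x + R[1]*p.y + R[2]*p.z + t[0];
        double yc = R[3]*p.x + R[4]*p.y + R[5]*p.z + t[1];
        double zc = R[6]*p.x + R[7]*p.y + R[8]*p.z + t[2];
        // perspective projection into pixel coordinates
        double u = fx * xc / zc + cx;
        double v = fy * yc / zc + cy;
        double du = u - observed[i].u;
        double dv = v - observed[i].v;
        sum += du * du + dv * dv;
    }
    return std::sqrt(sum / object.size());
}
```

Feeding each view's objectPoints with the corresponding rvec/tvec (converted via Rodrigues) should reproduce the per-view contribution to the error and show whether one view is the outlier.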
Here is the code I am trying to implement:
bool FlowPut::calibrateProjector()
{
    // create a fullscreen window on the second screen (the projector)
    namedWindow("projector", CV_WINDOW_NORMAL);
    moveWindow("projector", -1920, 0);
    setWindowProperty("projector", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);

    // construct the chessboard pattern image and store the inner corner
    // positions in projector pixel coordinates
    Size patternSize(6, 5);
    int width = 1920;
    int height = 1080;
    int marginX = 0.1f * width;
    int marginY = 0.1f * height;
    int widthPerCell = (width - 2 * marginX) / (patternSize.width + 1);
    int heightPerCell = (height - 2 * marginY) / (patternSize.height + 1);
    Mat chessBoardPattern(height, width, CV_8UC1, Scalar(255));
    vector<Point2f> cornersInProjector2D;
    for (int r = 0; r < patternSize.height + 1; ++r) {
        for (int c = 0; c < patternSize.width + 1; ++c) {
            int x = marginX + c * widthPerCell;
            int y = marginY + r * heightPerCell;
            unsigned char color = 255 * ((c + r * (patternSize.width + 1)) % 2);
            Rect roi(x, y, widthPerCell, heightPerCell);
            chessBoardPattern(roi).setTo(Scalar::all(color));
            if (r > 0 && c > 0) // inner corners only
                cornersInProjector2D.push_back(Point2f(x, y));
        }
    }
    imshow("projector", chessBoardPattern);
    waitKey(1);

    // acquire calibration pairs (projector 2D <-> Kinect 3D) from multiple views
    size_t numViewsNeeded = 1;
    Mat colorFrame, colorFrameGray, depthFrame;
    vector<vector<Point3f> > objectPoints;
    vector<vector<Point2f> > imagePoints;
    vector<Eigen::Affine3f> transforms;
    while (objectPoints.size() < numViewsNeeded) {
        inputSource_->updateFrame();
        inputSource_->getColorFrameRegistered(colorFrame);
        inputSource_->getDepthFrame(depthFrame);
        imshow("color", colorFrame);
        imshow("depth", depthFrame);
        if (waitKey(1) == 112) { // "p" pressed: capture this view
            vector<Point2f> imagePointsLocal;
            vector<Point3f> objectPointsLocal;

            // find the projected corners in the Kinect color image
            vector<Point2f> cornersInKinect2D;
            cvtColor(colorFrame, colorFrameGray, CV_BGR2GRAY);
            if (!findChessboardCorners(colorFrameGray, patternSize, cornersInKinect2D,
                                       CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FILTER_QUADS))
                continue;
            cornerSubPix(colorFrameGray, cornersInKinect2D, Size(5, 5), Size(-1, -1),
                         TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));

            // look up the 3D position of each corner via the depth map
            for (size_t i = 0; i < cornersInKinect2D.size(); ++i) {
                Point2f pProjector2D = cornersInProjector2D[i];
                Point2f pKinect2D = cornersInKinect2D[i];
                Point3f pKinect3D = inputSource_->getXYZ(pKinect2D.x, pKinect2D.y);
                if (isfinite(pKinect3D.x)) { // skip corners with invalid depth
                    imagePointsLocal.push_back(pProjector2D);
                    objectPointsLocal.push_back(pKinect3D);
                }
            }

            // transform the 3D positions into a board-local coordinate system
            // (PCA plane fit), then force z = 0 so the object points are planar
            PointCloud<PointXYZ>::Ptr cloud(new PointCloud<PointXYZ>());
            for (const Point3f &p : objectPointsLocal)
                cloud->push_back(PointXYZ(p.x, p.y, p.z));
            Eigen::Affine3f transform;
            PCLHelper::computePCATransform(cloud, transform, -Eigen::Vector3f::UnitZ());
            transformPointCloud(*cloud, *cloud, transform);
            for (size_t i = 0; i < imagePointsLocal.size(); ++i) {
                const PointXYZ &p = cloud->points[i];
                Point3f newP(p.x, p.y, p.z);
                newP.z = 0;
                objectPointsLocal[i] = newP;
            }

            // display the detected corners
            drawChessboardCorners(colorFrame, patternSize, Mat(cornersInKinect2D), true);
            imshow("calib_" + to_string(objectPoints.size()), colorFrame);

            // store this view
            objectPoints.push_back(objectPointsLocal);
            imagePoints.push_back(imagePointsLocal);
            transforms.push_back(transform);
        }
    }

    // initial intrinsics guess; note that with flags == 0 calibrateCamera
    // ignores this matrix -- it is only used with CV_CALIB_USE_INTRINSIC_GUESS
    Mat cameraMatrix = Mat::eye(3, 3, CV_64F);
    cameraMatrix.at<double>(0, 0) = 1; // wild guess for fx
    cameraMatrix.at<double>(1, 1) = 1; // wild guess for fy
    cameraMatrix.at<double>(0, 2) = 0.5 * width - 0.5;  // cx at image center
    cameraMatrix.at<double>(1, 2) = 0.5 * height - 0.5; // cy at image center
    cameraMatrix.at<double>(2, 2) = 1.0;
    Mat distCoeffs = Mat::zeros(8, 1, CV_64F);
    vector<Mat> rvecs, tvecs;
    int flags = 0;
    double projectionError = calibrateCamera(objectPoints, imagePoints, Size(width, height),
                                             cameraMatrix, distCoeffs, rvecs, tvecs, flags);
    for (const Mat &rvec : rvecs) {
        Mat rMat;
        Rodrigues(rvec, rMat); // per-view projector rotation (currently unused)
    }
    destroyWindow("projector");
    return true;
}
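For completeness, PCLHelper::computePCATransform is not shown above. Conceptually it maps the coplanar corner points into a board-local frame whose z axis is the board normal, so every point ends up with z close to 0. The same idea can be sketched without PCL by building an orthonormal basis directly from the corner grid (a simplified stand-in, not PCL's actual PCA implementation):

```cpp
#include <cmath>
#include <vector>

struct V3 { double x, y, z; };

static V3 sub(V3 a, V3 b)     { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 cross(V3 a, V3 b)   { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static double dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static V3 norm(V3 a)          { double l = std::sqrt(dot(a, a)); return {a.x/l, a.y/l, a.z/l}; }

// Express coplanar points in a board-local frame: x along the grid rows,
// z along the board normal. For truly coplanar input every local z is ~0.
// 'alongRow' and 'alongCol' are direction hints taken from the corner grid.
std::vector<V3> toBoardFrame(const std::vector<V3>& pts, V3 alongRow, V3 alongCol)
{
    V3 origin = pts[0];
    V3 ex = norm(alongRow);
    V3 en = norm(cross(alongRow, alongCol)); // board normal
    V3 ey = cross(en, ex);                   // completes the right-handed basis
    std::vector<V3> out;
    for (const V3& p : pts) {
        V3 d = sub(p, origin);
        out.push_back({dot(d, ex), dot(d, ey), dot(d, en)});
    }
    return out;
}
```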
For context: this is part of a project for designing real-time tracking and displaying relative positions in Unity. The application will be used by the Coast Guard to track illegal movement along the US border in real time, specifically in a secure area with many kinds of movement at a Coast Guard base: http://militarybases.co/directory/air-station-cape-cod-coast-guard-base-in-cape-cod-ma/ . Right now I am stuck on this basic calibration step and am only making small snippet changes to try to get correct output. After the model works, it will be integrated with a real-time drone (which will use radar to pinpoint movement and pass the coordinates to the model), and the result will be displayed on a patrol officer's computer.
Thanks for your time,