Camera Calibration – Part I

In the last few years, camera quality has improved while prices have dropped to the point where we can find cameras almost everywhere. Most smartphones, if not all, have one or even two cameras, and image-heavy sites such as YouTube, Netflix, Flickr and Instagram are among the most popular websites in the world. At the same time, microprocessors have become more powerful and cheaper, so programs can process images and video in real time.

In computer vision, the field of stereo vision is receiving more and more attention. Mobile robot navigation, augmented reality, 3D sensing, 3D scanning and 3D tracking are some examples of stereo vision applications.

In order to take accurate measurements of the 3D world, the cameras must be calibrated; this is the basis of every stereo vision system. But why do we need to calibrate the cameras? First, a camera's sensor and lenses are not perfect, and they are not perfectly assembled either: each and every camera has different errors and tolerances, which cannot be generalized. Second, in a stereo system, we must know where both cameras are installed, and at what distance and angle from each other.

Camera parameters

As mentioned before, in a computer vision system the cameras must be calibrated. But what exactly must be calibrated? Using the simplest camera model, the pinhole model, there are two sets of parameters, known as the intrinsic and extrinsic parameters.

Intrinsic parameters

Intrinsic parameters are the camera's internal parameters, such as focal length, principal point and lens distortions.

Using homogeneous coordinates, we can model intrinsic parameters using the following matrix:

K = \begin{bmatrix} \alpha_x & S & u_0 \\ 0 & \alpha_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}

Where \alpha_x is the product of the physical focal length of the pinhole (f_x) and the size of the individual imager elements (\sigma_x), expressed in pixels/millimeter. The same holds for the focal length f_y and \sigma_y.

u_0 and v_0 model the displacement between the optical axis and the center of coordinates of the projection screen.

For square imager elements, f_x and f_y are equal. In low-cost cameras the elements can be rectangular, in which case f_x differs from f_y.

Sometimes K also has a skew parameter, here represented by S.
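As an illustrative sketch (using NumPy, with made-up parameter values — the focal lengths and principal point below are hypothetical, not from any real calibration), the intrinsic matrix K and the pinhole projection of a point given in the camera frame look like this:

```python
import numpy as np

# Hypothetical intrinsic parameters (units: pixels)
alpha_x, alpha_y = 800.0, 800.0   # focal lengths in pixel units
u0, v0 = 320.0, 240.0             # principal point
S = 0.0                           # skew, usually 0 for modern sensors

K = np.array([[alpha_x, S,       u0],
              [0.0,     alpha_y, v0],
              [0.0,     0.0,     1.0]])

# Project a point expressed in the camera coordinate system.
# The result is homogeneous: divide by the third component.
X_cam = np.array([0.1, -0.05, 2.0])   # metres in front of the camera
uvw = K @ X_cam
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
print(u, v)  # 360.0 220.0
```

Note that the projection only makes sense for points already in the camera frame; mapping world points into that frame is the job of the extrinsic parameters below.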

Extrinsic parameters

Extrinsic parameters describe the position of the camera relative to the object coordinate system. In a multi-camera system, the extrinsic parameters also describe the relationship between the cameras.


Figure 1 – Camera versus world coordinate

In a 3D world, the camera can be rotated about all three axes and displaced in relation to the object coordinate system.

The three possible rotations are given by the following matrices:

R_x = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\psi & -\sin\psi \\ 0 & \sin\psi & \cos\psi \end{bmatrix} \quad
R_y = \begin{bmatrix} \cos\phi & 0 & \sin\phi \\ 0 & 1 & 0 \\ -\sin\phi & 0 & \cos\phi \end{bmatrix} \quad
R_z = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}

We define R as the product of Rx, Ry and Rz.
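A minimal sketch of these rotations in NumPy (the angle values and the Rz·Ry·Rx multiplication order below are an assumed convention for illustration; other orders are equally valid):

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Composed rotation R (hypothetical angles in radians)
R = rot_z(0.1) @ rot_y(0.2) @ rot_x(0.3)

# A rotation matrix is orthonormal with determinant +1
print(np.allclose(R @ R.T, np.eye(3)))   # True
print(np.isclose(np.linalg.det(R), 1.0)) # True
```

The orthonormality check at the end is a quick sanity test worth keeping around: any matrix claiming to be a rotation should satisfy R·Rᵀ = I and det(R) = 1.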

The translation vector can be represented by T and then the extrinsic parameters of the camera can be modeled as

E = \begin{bmatrix} R \mid T \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \end{bmatrix}
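To make the extrinsic mapping concrete, here is a sketch (with a hypothetical rotation of 90° about the x axis and an arbitrary translation) of how E = [R | T] carries a homogeneous world point into the camera frame:

```python
import numpy as np

# Hypothetical extrinsic parameters: rotate 90 degrees about x, then translate
theta = np.pi / 2
R = np.array([[1.0, 0.0,            0.0],
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta),  np.cos(theta)]])
T = np.array([[0.0], [0.0], [5.0]])

# E is the 3x4 matrix [R | T]
E = np.hstack([R, T])

# Homogeneous world point (x, y, z, 1)
X_world = np.array([1.0, 2.0, 0.0, 1.0])
X_cam = E @ X_world
print(X_cam)  # approximately [1. 0. 7.]
```

Chaining this with the intrinsic matrix, a full pinhole projection of a world point is K · E · X_world, up to the homogeneous division.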

In the next posts we’ll cover some classic camera calibration methods. See you soon!

Marcelo Jo

Marcelo Jo is an electronics engineer with 10+ years of experience in embedded systems, a postgraduate degree in computer networks, and is a master's student in computer vision at Université Laval in Canada. He shares his knowledge in this blog when he is not enjoying his wonderful family – wife and 3 kids. Life couldn't be better.

LinkedIn 

2 Responses to 'Camera Calibration – Part I'

  1. Felipe Neves says:

    Nice post Jo!

    A question: I noticed you use the matrices based on Euler angles. Does the calibration algorithm deal with the effect of a possible gimbal lock?

    Regards

    Felipe Neves.

  2. andre says:

    Great post, thanks for sharing!
