One related result about the moments and probability distribution of a transformation of a random variable (or random vector) is the transformation theorem.

A transformation in mathematics is a function f that maps a set X to itself, i.e. f: X → X. It usually has some geometric grounding. Examples include linear transformations of vector spaces and geometric transformations, such as projective transformations, affine transformations, and specific affine transformations like rotations, reflections, and translations.

**Partial Transformation:-**

While it is common to refer to any function of a set into itself as a transformation (particularly in terms like “transformation semigroup” and similar), there is another terminological convention in which the term “transformation” is reserved for bijections. When this restrictive definition is generalised to partial functions, a partial transformation is a function f: A → B, where A and B are both subsets of a set X.

**Some basics about congruence transformations:-**

When we look in the mirror, we see ourselves, or more accurately, a picture of ourselves that looks just like us. We are reflected in the mirror, and the image looks back at us. This is known as a reflection in mathematics, and it is an example of a congruence transformation.

When two objects have the same shape and size, we call them congruent. Our reflections in a mirror, for example, have the same shape and size as we do, thus we would say that we are congruent with our reflection.

We can think of our reflection as the result of flipping ourselves over the plane of the mirror so that we end up facing ourselves. In other words, we can move the original object, ourselves, so that it fits perfectly over our reflection in the mirror. This gives another way of defining congruent objects: two objects are congruent if one of them can be moved, without changing its shape or size, so that it fits exactly over the other. These movements are known as congruence transformations.

Congruence transformations are operations on an object that result in the creation of a congruent object. Congruence transformations can be divided into three categories:

- Translation (slide)
- Rotation (turn)
- Reflection (flip)
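The three congruence transformations above can be checked numerically. The following sketch (my own illustration, using NumPy; the triangle coordinates and rotation angle are arbitrary choices) applies a rotation, a reflection, and a translation to a triangle and verifies that all pairwise distances between vertices, and hence shape and size, are preserved.

```python
import numpy as np

# Vertices of a triangle, one point per row (arbitrary example).
triangle = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])

def pairwise_distances(pts):
    # Matrix of distances between every pair of vertices;
    # a congruence transformation leaves this matrix unchanged.
    return np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

theta = np.pi / 3  # arbitrary rotation angle (turn)
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
reflection = np.array([[1.0, 0.0],
                       [0.0, -1.0]])   # flip across the x-axis
translation = np.array([2.0, -5.0])    # slide

# Rotate, then reflect, then translate every vertex.
moved = triangle @ rotation.T @ reflection.T + translation

# Shape and size are unchanged, so the image is congruent to the original.
print(np.allclose(pairwise_distances(triangle), pairwise_distances(moved)))  # True
```

A dilation, by contrast, would scale the distance matrix and so would not be a congruence transformation.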

**The transformation:-**

Assume that X is a random variable with a known distribution. Let g: R → R be a function, and define a new random variable Y = g(X). The problem is to derive the distribution of Y from the known distribution of X.

**Formulae for one to one function:-**

If the function g is one-to-one (e.g., strictly increasing or strictly decreasing), there are formulae for the probability mass (or density) function and the distribution function of Y.

The course on functions of random variables explains and proves these formulas, which are also known as transformation theorems.

In the lesson on functions of random vectors, its generalization to the multivariate case (where X is a random vector) is examined.
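In the one-to-one continuous case, the transformation theorem gives the density of Y as $f_Y(y) = f_X(g^{-1}(y)) \left| \frac{d}{dy} g^{-1}(y) \right|$. The following sketch (my own example, not one from the lessons referred to above) applies this formula with X ~ Exponential(1) and the strictly increasing map g(x) = 2x + 1, then checks numerically that the derived density integrates to 1 and yields the expected mean.

```python
import numpy as np

# Transformation theorem for a one-to-one g (illustrative example):
# X ~ Exponential(1), g(x) = 2x + 1 (strictly increasing), so
# f_Y(y) = f_X(g^{-1}(y)) * |d g^{-1}(y) / dy|.

def f_X(x):
    return np.where(x >= 0, np.exp(-x), 0.0)

def g_inv(y):
    return (y - 1.0) / 2.0

def f_Y(y):
    return f_X(g_inv(y)) * 0.5  # |d g^{-1}(y)/dy| = 1/2

# Numerical sanity checks on the derived density of Y = 2X + 1:
# it should integrate to 1, and E[Y] should equal 2 * E[X] + 1 = 3.
ys = np.linspace(1.0, 60.0, 200_000)
dy = ys[1] - ys[0]
total = float(np.sum(f_Y(ys)) * dy)
mean = float(np.sum(ys * f_Y(ys)) * dy)
print(round(total, 3), round(mean, 3))  # approximately 1.0 and 3.0
```

The integration range is truncated at 60, where the exponential tail is negligible.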

**Law of unconscious statistician:-**

When the function $g$ is not one-to-one and there is no straightforward way to derive the distribution of Y, we can still easily compute the expected value and other moments of Y, thanks to the so-called Law of the Unconscious Statistician (LOTUS).

The LOTUS is also known as the transformation theorem.

**LOTUS for discrete random variable:-**

The theorem is as follows for discrete random variables.

Proposition: Let X be a discrete random variable and g: R → R a function. Define

Y=g(X)

Then,

$E[Y] = E[g(X)] = \sum_{x \in R_X} g(x) \, p_X(x)$

Here, $R_X$ is the support of X and $p_X(x)$ is its probability mass function.

Unlike the usual formula

$E[Y] = \sum_{y \in R_Y} y \, p_Y(y),$

the above formula does not require us to know the support and probability mass function of Y.
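As a minimal illustration of the discrete LOTUS (the die example is my own, not the article's), we can compute E[Y] for Y = X² directly from the pmf of X:

```python
from fractions import Fraction

# X: a fair six-sided die, g(x) = x**2, Y = g(X).
support_X = range(1, 7)
p_X = Fraction(1, 6)  # uniform pmf on the support

# LOTUS: E[Y] = sum over x in R_X of g(x) * p_X(x);
# the pmf and support of Y are never needed.
E_Y = sum(x * x * p_X for x in support_X)
print(E_Y)  # 91/6
```

Using exact fractions avoids any floating-point rounding in the expectation.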

**LOTUS for continuous random variables:-**

The theorem goes like this for continuous random variables:

Proposition: Let X be a continuous random variable and g: R → R a function. Define

Y=g(X)

Then,

$E[Y] = E[g(X)] = \int_{-\infty}^{+\infty} g(x) \, f_X(x) \, dx$

where $f_X(x)$ is the probability density function of X.

Unlike the usual formula

$E[Y] = \int_{-\infty}^{+\infty} y \, f_Y(y) \, dy$

the above formula does not require us to know the probability density function of Y.
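The continuous case can be illustrated the same way (my own example): with X ~ Uniform(0, 1) and g(x) = x², LOTUS gives E[g(X)] = ∫₀¹ x² dx = 1/3, which a simple midpoint-rule integration confirms without ever deriving the density of Y.

```python
import numpy as np

# X ~ Uniform(0, 1), so f_X(x) = 1 on (0, 1); take g(x) = x**2.
# LOTUS: E[g(X)] = integral over (0, 1) of g(x) * f_X(x) dx = 1/3,
# with no need to derive the density of Y = X**2 first.
n = 1_000_000
xs = (np.arange(n) + 0.5) / n        # midpoint grid on (0, 1)
E_Y = float(np.mean(xs ** 2 * 1.0))  # g(x) * f_X(x), midpoint rule
print(round(E_Y, 6))  # 0.333333
```

Deriving the density of Y directly would require the change-of-variables formula; LOTUS sidesteps that entirely.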

**LOTUS for other moments:-**

Any moment of Y can be computed using the LOTUS, as long as the moment exists:

In the discrete case,

$E[Y^n] = E[g(X)^n] = \sum_{x \in R_X} g(x)^n \, p_X(x)$

and in the continuous case,

$E[Y^n] = E[g(X)^n] = \int_{-\infty}^{+\infty} g(x)^n \, f_X(x) \, dx$
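Continuing the earlier die example of my own, the second moment of Y = X² follows from the same pattern, with g(x)ⁿ in place of g(x):

```python
from fractions import Fraction

# n-th moment via LOTUS: E[Y**n] = sum of g(x)**n * p_X(x) over R_X.
# Illustrative choice: X a fair six-sided die, g(x) = x**2, n = 2.
support_X = range(1, 7)
p_X = Fraction(1, 6)
E_Y2 = sum((x * x) ** 2 * p_X for x in support_X)
print(E_Y2)  # 2275/6
```

Together with E[Y] = 91/6 from before, this would give the variance of Y as E[Y²] − E[Y]².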

**LOTUS for moment generating and characteristic functions:-**

The moment generating function (mgf) can be computed using the LOTUS.

$M_Y(t) = E[e^{tY}] = E[e^{t g(X)}]$

The moment generating function, when it exists, completely characterises the distribution of Y.

If we can compute the above expected value and recognise it as the mgf of a known distribution, then that known distribution is the distribution of Y. In fact, two random variables have the same distribution if and only if they have the same mgf.

Similar observations can be made for the characteristic function.

$\varphi_Y(t) = E[e^{itY}] = E[e^{it g(X)}]$
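A quick numerical sanity check (my own illustration, reusing the die-and-square example): the mgf of Y computed via LOTUS over the distribution of X agrees with the direct definition E[e^{tY}] computed over the pmf of Y.

```python
import numpy as np

# Mgf of Y = g(X) via LOTUS: M_Y(t) = E[e^{t g(X)}].
# Illustrative setup: X a fair six-sided die, g(x) = x**2.
support_X = np.arange(1, 7)
p_X = 1.0 / 6.0

def mgf_Y_lotus(t):
    # Expectation taken over the distribution of X only.
    return float(np.sum(np.exp(t * support_X ** 2) * p_X))

# Cross-check against the direct definition E[e^{tY}]: here g happens
# to be one-to-one on the support, so Y is uniform on {1, 4, ..., 36}.
support_Y = support_X ** 2

def mgf_Y_direct(t):
    return float(np.sum(np.exp(t * support_Y) * p_X))

for t in (0.0, 0.1, -0.2):
    print(np.isclose(mgf_Y_lotus(t), mgf_Y_direct(t)))  # True each time
```

The same comparison works for the characteristic function by replacing `np.exp(t * ...)` with `np.exp(1j * t * ...)`.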

**Conclusion:-**

A transformation is a function, f, that maps a set to itself, i.e. f: X → X. Under the transformation, each pre-image is mapped to its image. The transformation can be any one, or a combination, of operations such as translation, rotation, reflection, and dilation.