#### Topics

##### Relations and Functions

##### Inverse Trigonometric Functions

##### Algebra

##### Matrices

- Introduction of Operations on Matrices
- Inverse of a Matrix by Elementary Transformation
- Multiplication of Two Matrices
- Negative of Matrix
- Properties of Matrix Addition
- Transpose of a Matrix
- Subtraction of Matrices
- Addition of Matrices
- Symmetric and Skew Symmetric Matrices
- Types of Matrices
- Proof of the Uniqueness of Inverse
- Invertible Matrices
- Elementary Transformations
- Multiplication of Matrices
- Properties of Multiplication of Matrices
- Equality of Matrices
- Order of a Matrix
- Matrices Notation
- Introduction of Matrices
- Multiplication of a Matrix by a Scalar
- Properties of Scalar Multiplication of a Matrix
- Properties of Transpose of the Matrices

##### Calculus

##### Vectors and Three-dimensional Geometry

##### Determinants

- Applications of Determinants and Matrices
- Elementary Transformations
- Inverse of a Square Matrix by the Adjoint Method
- Properties of Determinants
- Determinant of a Square Matrix
- Determinants of Matrix of Order One and Two
- Introduction of Determinant
- Area of a Triangle
- Minors and Co-factors
- Determinant of a Matrix of Order 3 × 3
- Rule A=KB

##### Continuity and Differentiability

- Derivative - Exponential and Log
- Concept of Differentiability
- Proofs of the Derivatives of x^n, sin x, cos x and tan x
- Infinite Series
- Higher Order Derivative
- Algebra of Continuous Functions
- Continuous Function of Point
- Mean Value Theorem
- Second Order Derivative
- Derivatives of Functions in Parametric Forms
- Logarithmic Differentiation
- Exponential and Logarithmic Functions
- Derivatives of Implicit Functions
- Derivatives of Inverse Trigonometric Functions
- Derivatives of Composite Functions - Chain Rule
- Concept of Continuity

##### Applications of Derivatives

- Maximum and Minimum Values of a Function in a Closed Interval
- Maxima and Minima
- Simple Problems on Applications of Derivatives
- Graph of Maxima and Minima
- Approximations
- Tangents and Normals
- Increasing and Decreasing Functions
- Rate of Change of Bodies or Quantities
- Introduction to Applications of Derivatives

##### Sets

- Sets

##### Integrals

- Definite Integrals Problems
- Indefinite Integral Problems
- Comparison Between Differentiation and Integration
- Geometrical Interpretation of Indefinite Integrals
- Integrals of Some Particular Functions
- Indefinite Integral by Inspection
- Some Properties of Indefinite Integral
- Integration Using Trigonometric Identities
- Introduction of Integrals
- Evaluation of Definite Integrals by Substitution
- Properties of Definite Integrals
- Fundamental Theorem of Calculus
- Definite Integral as the Limit of a Sum
- Evaluation of Simple Integrals of the Following Types and Problems
- Methods of Integration: Integration by Parts
- Methods of Integration: Integration Using Partial Fractions
- Methods of Integration: Integration by Substitution
- Integration as an Inverse Process of Differentiation

##### Applications of the Integrals

##### Differential Equations

- Linear Differential Equations
- Solutions of Linear Differential Equation
- Homogeneous Differential Equations
- Differential Equations with Variables Separable Method
- Formation of a Differential Equation Whose General Solution is Given
- General and Particular Solutions of a Differential Equation
- Order and Degree of a Differential Equation
- Basic Concepts of Differential Equation
- Procedure to Form a Differential Equation that Will Represent a Given Family of Curves

##### Vectors

- Direction Cosines
- Properties of Vector Addition
- Geometrical Interpretation of Scalar
- Scalar Triple Product of Vectors
- Vector (Or Cross) Product of Two Vectors
- Scalar (Or Dot) Product of Two Vectors
- Position Vector of a Point Dividing a Line Segment in a Given Ratio
- Multiplication of a Vector by a Scalar
- Addition of Vectors
- Introduction of Vector
- Magnitude and Direction of a Vector
- Basic Concepts of Vector Algebra
- Vectors and Their Types
- Components of Vector
- Section Formula
- Vector Joining Two Points
- Vectors Examples and Solutions
- Projection of a Vector on a Line
- Introduction of Product of Two Vectors

##### Three-Dimensional Geometry

- Three-Dimensional Geometry Examples and Solutions
- Introduction of Three-Dimensional Geometry
- Equation of a Plane Passing Through Three Non Collinear Points
- Relation Between Direction Ratio and Direction Cosines
- Intercept Form of the Equation of a Plane
- Coplanarity of Two Lines
- Distance of a Point from a Plane
- Angle Between Line and a Plane
- Angle Between Two Planes
- Angle Between Two Lines
- Vector and Cartesian Equation of a Plane
- Shortest Distance Between Two Lines
- Equation of a Line in Space
- Direction Cosines and Direction Ratios of a Line
- Equation of a Plane in Normal Form
- Equation of a Plane Perpendicular to a Given Vector and Passing Through a Given Point
- Plane Passing Through the Intersection of Two Given Planes

##### Linear Programming

##### Probability

- Variance of a Random Variable
- Probability Examples and Solutions
- Conditional Probability
- Multiplication Theorem on Probability
- Independent Events
- Bayes’ Theorem
- Random Variables and Its Probability Distributions
- Mean of a Random Variable
- Bernoulli Trials and Binomial Distribution
- Introduction of Probability
- Properties of Conditional Probability
- Partition of a Sample Space
- Theorem of Total Probability

## Notes

If `E_1, E_2, ..., E_n` are n nonempty events which constitute a partition of the sample space S, i.e. `E_1, E_2, ..., E_n` are pairwise disjoint and `E_1 ∪ E_2 ∪ ... ∪ E_n = S`, and A is any event of nonzero probability, then

`P(E_i|A) = (P(E_i) P(A|E_i))/(sum_(j=1)^n P(E_j) P(A|E_j))`

for any i = 1, 2, 3, ..., n

**Proof:** By formula of conditional probability, we know that

`P(E_i|A) = (P(A ∩ E_i))/(P(A))`

`= (P(E_i) P(A|E_i))/(P(A))` (by the multiplication rule of probability)

`= (P(E_i) P(A|E_i))/(sum_(j=1)^n P(E_j) P(A|E_j))` (by the theorem of total probability)

**Remark:** The following terminology is generally used when Bayes' theorem is applied. The events `E_1, E_2, ..., E_n` are called hypotheses.

The probability `P(E_i)` is called the a priori probability of the hypothesis `E_i`.

The conditional probability `P(E_i|A)` is called the a posteriori probability of the hypothesis `E_i`.

Bayes' theorem is also called the formula for the probability of "causes". Since the `E_i`'s form a partition of the sample space S, one and only one of the events `E_i` occurs (i.e. one of the events `E_i` must occur, and only one can occur). Hence, the formula above gives the probability of a particular `E_i` (i.e. a "cause"), given that the event A has occurred.
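Bayes' theorem can be checked numerically. The sketch below uses hypothetical values (two boxes chosen with equal probability and assumed chances of drawing a red ball from each; these numbers are illustrative, not from the text) and exact fractions so the arithmetic stays transparent.

```python
from fractions import Fraction as F

# Hypothetical setup: boxes E_1, E_2 chosen with equal probability,
# A = "a red ball is drawn". All numbers below are assumptions.
priors = [F(1, 2), F(1, 2)]        # a priori probabilities P(E_1), P(E_2)
likelihoods = [F(3, 10), F(4, 5)]  # P(A|E_1), P(A|E_2)

# Denominator via the theorem of total probability: P(A) = sum_j P(E_j) P(A|E_j)
p_a = sum(p * l for p, l in zip(priors, likelihoods))

# Bayes' theorem: P(E_i|A) = P(E_i) P(A|E_i) / P(A)
posteriors = [p * l / p_a for p, l in zip(priors, likelihoods)]

print(p_a)              # 11/20
print(posteriors)       # [Fraction(3, 11), Fraction(8, 11)]
print(sum(posteriors))  # 1 -- the a posteriori probabilities always sum to 1
```

Note how the denominator is the same for every `E_i`, so Bayes' theorem simply rescales each term `P(E_i) P(A|E_i)` so that the posteriors sum to 1.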

Video link: https://youtu.be/UVx7q7qN-6k

**1) Partition of a sample space:**

A set of events `E_1, E_2, ..., E_n` is said to represent a partition of the sample space S if

(a) `E_i ∩ E_j = φ, i ≠ j, i, j = 1, 2, 3, ..., n`

(b) `E_1 ∪ E_2 ∪ ... ∪ E_n = S` and

(c) `P(E_i) > 0 "for all" i = 1, 2, ..., n.`

In other words, the events `E_1, E_2, ..., E_n` represent a partition of the sample space S if they are pairwise disjoint, exhaustive and have nonzero probabilities.

As an example, any event E with 0 < P(E) < 1 together with its complement E′ forms a partition of the sample space S, since E ∩ E′ = φ and E ∪ E′ = S.
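Conditions (a) and (b) can be verified mechanically for the E and E′ example. The die-roll sample space below is an assumption chosen for illustration.

```python
# A minimal check that an event E and its complement E' partition a sample space S.
# The die-roll sets here are illustrative assumptions.
S = {1, 2, 3, 4, 5, 6}   # sample space of one die roll
E = {2, 4, 6}            # E = "an even number appears"
E_comp = S - E           # E' = {1, 3, 5}

# (a) pairwise disjoint: E ∩ E' = φ
assert E & E_comp == set()
# (b) exhaustive: E ∪ E' = S
assert E | E_comp == S
# (c) nonzero probabilities: both events are nonempty (equally likely outcomes)
assert len(E) > 0 and len(E_comp) > 0

print("E and E' form a partition of S")
```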

**2) Theorem of total probability:**

Let `{E_1, E_2,...,E_n}` be a partition of the sample space S, and suppose that each of the events `E_1, E_2,..., E_n` has nonzero probability of occurrence. Let A be any event associated with S, then

`P(A) = P(E_1) P(A|E_1) + P(E_2) P(A|E_2) + ... + P(E_n) P(A|E_n)`

= `sum_(j=1)^n P(E_j) P(A|E_j)`

**Proof:** Given that `E_1, E_2, ..., E_n` is a partition of the sample space S, we have

S = `E_1 ∪ E_2 ∪ ... ∪ E_n` ... (1)

and `E_i ∩ E_j = φ, i ≠ j, i, j = 1, 2, ..., n`

Now, we know that for any event A,

A = A ∩ S

=` A ∩ (E_1 ∪ E_2 ∪ ... ∪ E_n)`

= `(A ∩ E_1) ∪ (A ∩ E_2) ∪ ...∪ (A ∩ E_n)`

Also, `A ∩ E_i` and `A ∩ E_j` are subsets of `E_i` and `E_j` respectively. Since `E_i` and `E_j` are disjoint for i ≠ j, it follows that `A ∩ E_i` and `A ∩ E_j` are also disjoint for all i ≠ j, i, j = 1, 2, ..., n.

Thus,

`P(A) = P [(A ∩ E_1) ∪ (A ∩ E_2)∪ .....∪ (A ∩ E_n)]`

= `P (A ∩ E_1) + P (A ∩ E_2) + ... + P (A ∩ E_n)`

Now, by multiplication rule of probability, we have

`P(A ∩ E_i) = P(E_i) P(A|E_i)`, since `P(E_i) ≠ 0` for all i = 1, 2, ..., n

Therefore, `P(A) = P(E_1) P(A|E_1) + P(E_2) P(A|E_2) + ... + P(E_n) P(A|E_n)`

or `P(A) = sum_(j = 1)^n P(E_j) P(A|E_j)`
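The sum on the right can be evaluated directly for a concrete partition. The three-box setup below (boxes chosen with probabilities 1/2, 1/3, 1/6, and assumed conditional probabilities of drawing a defective item from each) is hypothetical, chosen only to exercise the formula.

```python
from fractions import Fraction as F

# Hypothetical partition: boxes E_1, E_2, E_3 chosen with the probabilities below,
# A = "a defective item is drawn". All numbers are illustrative assumptions.
p_e = [F(1, 2), F(1, 3), F(1, 6)]           # P(E_j)
p_a_given_e = [F(1, 10), F(1, 5), F(1, 2)]  # P(A|E_j)

# The E_j are exhaustive with nonzero probability, so their probabilities sum to 1.
assert sum(p_e) == 1

# Theorem of total probability: P(A) = sum_j P(E_j) P(A|E_j)
p_a = sum(pe * pa for pe, pa in zip(p_e, p_a_given_e))

print(p_a)  # 1/20 + 1/15 + 1/12 = 1/5
```

Each term `P(E_j) P(A|E_j)` is `P(A ∩ E_j)`, so the sum simply reassembles P(A) from the disjoint pieces `A ∩ E_j`.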

Video list: https://youtu.be/_jY8B_0dZgo