Time complexity is one of the most important concepts in DSA. It tells us how the running time of an algorithm changes when the input size gets bigger.
We don't measure exact seconds here; we care about the growth rate of the work done as the input becomes large (like 10 → 100 → 10,000 → 1 million).
Simple idea: Time complexity answers the question: "If my data becomes 10× bigger, how much slower (or faster) does my program become?"
You go to a library and always take the first book from the shelf.
It doesn't matter whether the shelf has 10 books or 10,000 books.
Time taken is almost the same: you just pick the first one.
→ This is like a very fast algorithm whose running time does not grow with the input size.
You are searching for βRahulβ in a list of names. You check one by one from start to end.
Rahul might be at the start, somewhere in the middle, or right at the end of the list.
Worst case: You may need to look at the entire list.
→ More names = more time (time grows with the input size).
A teacher wants to arrange 100 exam papers from lowest to highest marks.
They compare papers again and again to put them in correct order.
→ When data grows, work grows much faster than the data itself.
for(int i = 0; i < n; i++) {
cout << i << endl;
}
If n = 5 → loop runs 5 times
If n = 100 → loop runs 100 times
If n = 1,000,000 → loop runs 1 million times
Idea: Work done is directly proportional to input size n.
for(int i = 0; i < n; i++) {
cout << "First loop: " << i << endl;
}
for(int j = 0; j < n; j++) {
cout << "Second loop: " << j << endl;
}
If n = 5 → total 10 steps (5 + 5)
If n = 100 → total 200 steps (100 + 100)
Idea: Still grows linearly; twice as much work, but the same growth pattern.
Practice: Think of 2–3 daily tasks and decide: does the time grow with more items, or stay almost the same?