Working to understand time complexity
When you're going through a coding bootcamp, there is limited time to absorb everything being taught. A key skill you must master is self-instruction, because some things are left for you to learn on your own. So the first time I heard the term "Big O", I had no idea what it was. Given how abstract the concept is, I suspect many students who were brand new to programming wouldn't have understood it either.
As I started to work through more HackerRank problems, I found that I was able to brute force some solutions by nesting for loops inside one another. Yes, this worked to solve the problem itself, but what about the test cases that had hundreds of elements, or even thousands? I didn't factor this in early in my HackerRank career.
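As a small illustration of the kind of brute force I mean (the function names and numbers here are my own sketch, not from any specific HackerRank problem), here is a pair-sum check done with nested loops, next to a version that avoids the inner loop:

```python
def has_pair_with_sum(nums, target):
    # Brute force: nested loops check every pair, roughly n^2 steps.
    # Fine for small inputs, painful when the list has thousands of elements.
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return True
    return False


def has_pair_with_sum_fast(nums, target):
    # One pass with a set: roughly n steps, because set lookups are
    # constant time on average.
    seen = set()
    for n in nums:
        if target - n in seen:
            return True
        seen.add(n)
    return False
```

Both return the same answers; the difference only shows up in how the running time grows as the input grows.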
I have recently started reading an algorithms book that has helped me start to grasp this concept. I'm still not a pro at counting the steps an algorithm takes as it processes each element. However, I have learned that constants are generally dropped when describing how fast an algorithm is in Big O notation.
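Here's a quick sketch of what dropping constants means in practice (my own example, not from the book): one version loops over the data twice, the other once, yet both are considered O(n).

```python
def min_and_max_two_passes(nums):
    # Two separate passes over the list: roughly 2n steps, or "O(2n)".
    # The constant 2 is dropped, so this is still O(n).
    return min(nums), max(nums)


def min_and_max_one_pass(nums):
    # One combined pass: roughly n steps. Faster in practice,
    # but the same O(n) growth rate as the version above.
    lo = hi = nums[0]
    for n in nums[1:]:
        if n < lo:
            lo = n
        elif n > hi:
            hi = n
    return lo, hi
```

Big O only describes how the step count grows with the input size, which is why the constant factor gets dropped even though it can matter for real running time.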
Having an understanding of how fast my algorithm is has helped me look closer at my solutions and ask whether there is a way to make them faster. I check whether there are any unnecessary loops, or whether there is a condition or keyword I can add that will cut my steps short and avoid checking something I don't need to check.
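A tiny example of that "cut my steps short" idea (again my own sketch): a linear search that returns as soon as it finds a match, instead of scanning the rest of the list. The step counter is just there to make the savings visible.

```python
def linear_search(nums, target):
    # Return early on a match rather than scanning every element.
    # The step counter shows how many elements were actually checked.
    steps = 0
    for n in nums:
        steps += 1
        if n == target:
            return True, steps  # early return skips the remaining checks
    return False, steps
```

Finding the target near the front means only a couple of checks; a miss still costs a full pass, which is the worst case Big O describes.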
I'm excited to continue reading through this book to get a much better grasp of this concept. It has done a good job of taking a lot of the math out of it, so I can take a crack at that side once I have a stronger foundation.