The concept of algorithm has existed for centuries.
Etymologically, the word 'algorithm' derives from the Latin algorismus, a Latinization of the name of Muhammad ibn Musa al-Khwarizmi, a 9th-century Persian mathematician.
In mathematics and computer science, an algorithm is a self-contained sequence of actions to be performed. Algorithms perform calculation, data processing, and automated reasoning tasks.
In computer systems, an algorithm is an instance of logic, written in software by developers, that produces output from given input on the intended "target" computer(s). A better algorithm, even running on old hardware, can produce results faster than a worse algorithm for the same purpose running on more efficient hardware; that is why algorithms, like computer hardware, are considered technology.
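The claim that a better algorithm can outrun faster hardware can be made concrete with a worst-case step count. The sketch below (a hypothetical illustration, not from the original text) compares searching a sorted collection of n items by linear scan (about n comparisons) against binary search (about log2(n) comparisons): even a machine a thousand times faster cannot save the linear scan for large n.

```python
from math import log2

def linear_steps(n):
    # Worst case for a linear scan: inspect every one of the n elements.
    return n

def binary_steps(n):
    # Binary search halves the remaining range on each comparison.
    return int(log2(n)) + 1

n = 1_000_000
print(linear_steps(n))   # 1000000 comparisons in the worst case
print(binary_steps(n))   # 20 comparisons in the worst case
# Even on hardware 1000x faster, the linear scan still costs the
# equivalent of 1000 comparisons -- 50x more than binary search.
print(linear_steps(n) / 1000)
```

The asymptotic gap (n versus log n) grows without bound, so no constant-factor hardware speedup can close it.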
Multiple algorithms may exist for a given problem, even without expanding the instruction set available to the programmer. It is important to distinguish between the notion of an algorithm, i.e. a procedure, and the notion of a problem computable by an algorithm, i.e. the mapping yielded by that procedure; the same problem may be solved by several different algorithms.
Unfortunately, there may be a tradeoff between goodness (speed) and elegance (compactness): an elegant program may take more steps to complete a computation than a less elegant one.
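Both points, several algorithms for one problem and the speed/elegance tradeoff, can be sketched with the classic example of computing a greatest common divisor. The subtraction-only form of Euclid's algorithm is compact but can loop many times; the remainder-based form does the same job in far fewer iterations. (The step-counting wrappers here are illustrative additions, not part of the original text.)

```python
def gcd_by_subtraction(a, b):
    # Elegant, subtraction-only Euclid: simple to state, but each pass
    # shrinks the larger value by only one multiple of the smaller.
    steps = 0
    while a != b:
        steps += 1
        if a > b:
            a -= b
        else:
            b -= a
    return a, steps

def gcd_by_remainder(a, b):
    # Remainder-based Euclid: one modulo operation replaces a whole
    # run of subtractions, so the loop terminates much sooner.
    steps = 0
    while b:
        steps += 1
        a, b = b, a % b
    return a, steps

print(gcd_by_subtraction(1071, 462))  # (21, 11) -- same answer, 11 iterations
print(gcd_by_remainder(1071, 462))    # (21, 3)  -- same answer, 3 iterations
```

Both procedures compute the same mapping (the gcd), confirming that one problem can have several algorithms with very different costs.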
This course provides an introduction to mathematical modeling of computational problems. It covers the common algorithms, algorithmic paradigms, and data structures used to solve these problems. The course emphasizes the relationship between algorithms and programming, and introduces basic performance measures and analysis techniques for these problems.