Among all the jazz of ClearType fonts and graphics, we often take for granted what is actually a serious task: running the hardware of a computer. Have you ever wondered how different the actual working of a computer is from what we see on the screen? In reality, it’s overwhelmingly difficult to imagine how computers work, especially today. It’s hard to believe that two voltage-based states of a bunch of transistors and gates can produce what we see on the screen. Today’s computers complete, in seconds, tasks that would take humans anywhere from a few minutes to several days to accomplish.

In this post, we discuss:

  • The basic structure of the operating system
  • How DOS booted up
  • The limitations of the Windows command line
  • How PowerShell addresses those limitations

To try to fathom what happens inside a computer (and this is of some importance), let’s go back to the days of DOS. As we all know, a computer is nothing but a collection of hardware that runs some code to get work done. But how does the software talk to the hardware?

Basic structure of the OS

Imagine the operating system to be made of two thick layers:

  • The kernel
  • The shell

In simple terms, the kernel talks to the hardware, and is responsible for all the interactions the operating system has with it. The shell, on the other hand, is responsible for interactions with the user. So this is what a computer looks like, in general:

  • User
  • Shell
  • OS libraries
  • Kernel
  • Hardware

So there’s the hardware. Then, there’s the kernel, the mediator between the hardware and the shell. And then there’s the shell, the mediator between the kernel and the user.

In the interest of keeping the interface simple, Microsoft, for one, made things in such a way that applications sat on top of the shell, and it was the applications that interacted with the users. A few examples of such applications are simple things like ping and telnet, and more complex things such as Adobe Photoshop. So the shell was not really exposed to the user. The command line interface (or CLI) tools sat on top of the shell. When the GUI came along, it sat alongside the CLI.

DOS booted up this way:

  • The hardware was powered on, and the Basic Input Output System (BIOS) ran the Power-On Self-Test (POST).
  • Once POST was complete, the BIOS looked for an operating system at the first sector of the first track of the first disk.
  • If an OS entry was found, the BIOS started loading the OS into RAM:
    • First, a file called io.sys was loaded into RAM.
    • Then, msdos.sys was loaded.
    • Finally, command.com was loaded.
  • The BIOS relinquished control to MS-DOS, and the computer was ready for operation.

Does the sequence look familiar? First the kernel (io.sys and msdos.sys), then the shell (command.com), which gave us the CLI.

Many applications could interact only with the CLI, so their capabilities were restricted to what the CLI exposed to them. And with CMD, that exposure was limited: the output was not always usable for programming, efficiency dropped significantly, and calling the classes within the internal framework was more than challenging.

Here are some of the challenges:

  • Too many commands to remember
  • Commands were not exactly systematic or uniform
  • Getting help seemed like juvenile text chat (dir /?)
  • Not-very-useful plain text output

Enter PowerShell

Overall, the Windows CLI had started to seem like a big mess people could not handle any more. Aside from the security issues that were a weakness of Windows, Microsoft now had more reasons to go for a paradigm shift. In 2006, they finally unveiled the first version of what is today one of the best things Microsoft ever created: Windows PowerShell. They made it available to install on Windows XP (Service Pack 3) and Windows Vista. Windows 7 and all subsequent versions of Windows shipped with PowerShell.

PowerShell managed to address all of the aforementioned problems. It also sits a level closer to the hardware than the old CLI tools did: PowerShell is an extension of the Windows shell itself, and it sits right on top of the .NET libraries.
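
To get a feel for that proximity, you can call into the .NET libraries directly from a PowerShell prompt. A quick illustration (the types shown here are just examples):

# Call .NET types straight from the prompt
[System.Math]::Pow(2, 10)          # 2 raised to the power 10
[System.IO.Path]::GetTempPath()    # the current temp directory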

Apart from the proximity to the libraries, here’s why PowerShell is awesome:

Remember less, use logic

PowerShell sounds like English. When we need water, we say, “Could you please get me some water?” Notice the verb, the noun, and the sequence in which they’re placed. If PowerShell ever got the capability of getting water, the command (or cmdlet, pronounced “command-let”) would be Get-Water. This way, we need to remember less. So if I want a list of processes running on my PC, I would just have to say Get-Process. Can it get any simpler?
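
To make that concrete, here are a few real cmdlets that follow the same Verb-Noun pattern:

# List the processes running on this PC
Get-Process

# The same pattern everywhere: list the services on this PC
Get-Service

# Or get the current date and time
Get-Date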

Systematic commands

With PowerShell, Microsoft (and the community) introduced the concept of Approved Verbs (type Get-Verb in the PowerShell window), wherein you have to choose from a predefined set of verbs to create your cmdlets. So if I wanted PowerShell to get me some water, I could only say Get-Water, and never Bring-Water. It just takes a little foresight.
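
For instance, you can pull up the list of approved verbs and check whether a particular verb is on it:

# List all approved verbs along with their groups
Get-Verb

# Check whether 'Get' is on the approved list
Get-Verb | Where-Object { $_.Verb -eq 'Get' }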

Finding commands simplified

If you want a command that, say, sets the date, you can find it using either the verb or the noun:

Get-Command -Noun 'Date'

Or you could be more specific:

Get-Command -Verb 'Set' -Noun 'Date'

Or you could simply take a guess and say,

Get-Command -Name 'Get-Date'
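
And if you only remember a fragment of the name, wildcards work too:

# Find every command whose name contains 'Date'
Get-Command -Name '*Date*'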

Output was usable

Textual output is great, but only when you just have to read it and not use it programmatically. To use textual output programmatically, you need to perform some level of text manipulation, and that was the case with CMD. I still remember the time when we were creating a batch script to get a user’s group membership a certain way; we struggled with text manipulation for a few hours to get the output the way we wanted. The output that PowerShell gives you, though, is more sensible, and… computable.
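
Because cmdlets emit objects rather than raw text, you can filter, sort, and pick columns on the output directly, with no string parsing involved. A small sketch (the 100 MB threshold is arbitrary):

# Pick processes using more than 100 MB of memory, sorted by CPU time
Get-Process |
    Where-Object { $_.WorkingSet64 -gt 100MB } |
    Sort-Object CPU -Descending |
    Select-Object Name, Id, CPU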

What makes it better is that if something isn’t sensible right away, you can make it sensible. For example, run the following command:

Get-ChildItem

You see a column called Length. It wouldn’t make sense to some people right off the bat. While it simply means “size”, people who are not very familiar with the technical terminology would not understand it at first glance. The fix? Use what are called calculated properties!

Here:

Get-ChildItem | Select-Object Name, @{Name="Size (MB)";Expression={$_.Length / 1MB}}

Go on, run that command on your console and see the result for yourself.

Calculated properties are nothing but existing values manipulated on the fly. You can simply tell PowerShell that you want the column renamed to “Size (MB)”. As long as the name is in quotes, you can have spaces if you want, but I’d recommend against it; you’ll understand why when we start referring to specific properties from the output. We’ll talk about it in more detail then.
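
As a small preview of why I prefer plain names, here’s the same idea with a property called SizeMB, which can then be referenced further down the pipeline without any quoting (a minimal sketch):

# A plain name like 'SizeMB' is easy to refer to later in the pipeline
Get-ChildItem |
    Select-Object Name, @{Name="SizeMB";Expression={[math]::Round($_.Length / 1MB, 2)}} |
    Where-Object { $_.SizeMB -gt 1 }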

Easier help

Getting help is no longer juvenile talk, but a pleasant experience. When you want to get some help, all you say is… that’s right, Get-Help! When you need help with the command to fetch the services running on your PC, you would simply say,

Get-Command -Verb 'Get' -Noun '*service*'

# You would then try to find some help for that specific command:
Get-Help Get-Service

# Not very helpful? Need complete help documentation? No problem:
Get-Help Get-Service -Full

# Still not helpful? Would you like to see an example or two? Here:
Get-Help Get-Service -Examples
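
A couple of related options are worth knowing too (assuming PowerShell 3.0 or later for updatable help, and an elevated console to update it):

# Download or refresh the local help files
Update-Help

# Or jump straight to the online help page in your browser
Get-Help Get-Service -Online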