Basics on Linux: Top Linux Commands – Part 2

Are you ready to take your Linux skills to the next level? Welcome back to the “Journey with Linux” series, where we dive deep into the core concepts of this open-source operating system. In this write-up, we’ll explore advanced Linux commands that are essential for any IT professional looking to upskill.

In Part 2, continuing from Top Linux Commands – Part 1 of our “Linux Fundamentals – A Journey with Linux” series, we’ll cover a range of advanced commands that will help you troubleshoot problems, optimize system performance, and streamline your workflows. From process management with ps and top to network troubleshooting with netstat and tcpdump, we’ll cover everything you need to know to become a Linux power user.

Our expert guides will provide detailed explanations of each command, including its syntax, use cases, and practical examples. We’ll also explore some lesser-known Linux commands that can be invaluable tools in your IT arsenal, such as awk, sed, and xargs.

So, whether you’re a seasoned Linux pro or just starting on your journey with Linux, this write-up is a must-read. Join us as we explore the top advanced Linux commands and take your Linux skills to the next level!

1. pipe: The pipe in Linux connects the output of one command to the input of another command for chaining multiple commands to create more complex workflows. It is represented by “|” symbol.
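A minimal sketch of chaining: the output of printf flows into sort, whose output flows into head.

```shell
# Chain commands with "|": each command's output becomes the next one's input.
printf 'cherry\napple\nbanana\n' | sort | head -n 1
# → apple
```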

2. less: It’s a pager utility that allows you to view the contents of a file or command output one page at a time. It is often used to view large files, such as log files, and to navigate through long command output.

You can press the q key to exit less and return to the terminal.

3. tee: It reads from standard input and writes to both standard output and files specified, allowing the user to view and store the output of a command simultaneously.

For example, tee prints the standard output of ls -lah to the terminal and also stores it in the file longlist.txt.

We can also append output to the same file without overwriting its existing contents by using the -a option.
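The two forms described above look like this (the filename longlist.txt comes from the text):

```shell
# Print the directory listing to the terminal AND save it to longlist.txt.
ls -lah | tee longlist.txt

# -a appends to the file instead of overwriting it.
echo "one more line" | tee -a longlist.txt
```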

4. cut: used to extract sections or columns of data from a file or input stream based on a specified delimiter, and can also be used to slice strings, making it a useful tool for working with large datasets.

Here file.txt contains a name and an age on each line; using the cut command, we can display just the first column.

-d “,” specifies the comma as the delimiter, and -f 1 specifies that we want to extract the first field, or column.

What is delimiter?

In a file or stream of data, there may be different pieces of information that are separated by a special character, such as a comma, a tab, or a semicolon. This special character is known as the delimiter.

Now, let’s extract the second column of the file.

Using the cut command, we can also slice strings by specifying the starting and ending positions of the characters to extract.

This will output the first five characters of the input string:
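The examples above can be sketched as follows; the contents of file.txt are assumed sample data matching the name,age layout described in the text.

```shell
# Sample file.txt with comma-separated name and age columns (contents assumed).
printf 'ram,20\nshruti,22\nmohan,25\n' > file.txt

# Extract the first column (names): -d sets the delimiter, -f picks the field.
cut -d "," -f 1 file.txt

# Extract the second column (ages).
cut -d "," -f 2 file.txt

# Slice characters 1 through 5 of a string with -c.
echo "abcdefghij" | cut -c 1-5
# → abcde
```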

We’ve extracted columns, but extracting rows with cut is not possible. For that we can use the head command: head displays the first few lines of a file, -n specifies the number of lines to display, and the pipe “|” lets us chain multiple commands together.

5. head: used to display the first few lines of a file.
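A quick sketch with generated sample data:

```shell
# Create a sample file containing the numbers 1-10, one per line.
seq 1 10 > newone.txt

# Show only the first 3 lines of the file.
head -n 3 newone.txt
```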

6. tail: It allows you to view the last few lines of a file or a stream of data. Its most common usage is to monitor log files in real-time as new data is being appended to the end of the file.

The command head -n 5 newone.txt | tail -n 3 will print the last 3 lines of the first 5 lines of the file “newone.txt”.

‘head -n 5 newone.txt’ – This command prints the first 5 lines of the file “newone.txt” to the standard output.

‘tail -n 3’ – This command takes the last 3 lines of the output from the previous command and prints them to the standard output.

So, the combination of these two commands will print the last 3 lines of the first 5 lines of the file “newone.txt“.
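A sketch of the combination above; newone.txt is generated here as sample data (the numbers 1–10), since the original file’s contents aren’t shown.

```shell
# Sample data: numbers 1-10, one per line.
seq 1 10 > newone.txt

# Last 3 lines of the file.
tail -n 3 newone.txt

# Lines 3-5: the last 3 of the first 5.
head -n 5 newone.txt | tail -n 3

# tail -f newone.txt would follow the file in real time as new lines
# are appended (press Ctrl+C to stop).
```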

7. nl: It adds line numbers to a file. It reads a file or standard input and writes a copy of it to standard output with line numbers added to each line.
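For example:

```shell
# nl numbers each line of its input (file or standard input).
printf 'apple\nbanana\ncherry\n' | nl
#      1  apple
#      2  banana
#      3  cherry
```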

8. sort: used to sort lines of text files in alphabetical or numerical order. It reads input either from standard input or from a list of files and sorts the lines and writes the result to standard output.

We can sort the lines in alphabetical order.

By using ‘-r’, we can reverse the alphabetical order.

‘-u’ sorts the file and removes the duplicate lines.

We can save the standard output into a file by using ‘>’, or append it without overwriting the file by using ‘>>’.

We can also sort a file in numerical order, if it contains numbers, with the ‘-n’ option.
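The variations above, sketched with assumed sample data:

```shell
# Sample file with an unsorted list of names (contents assumed).
printf 'mohan\nasha\nshruti\nasha\n' > names.txt

sort names.txt               # alphabetical order
sort -r names.txt            # reverse alphabetical order
sort -u names.txt            # sorted, with duplicates removed
sort names.txt > sorted.txt  # save the output ('>>' would append instead)

# -n sorts numerically instead of character by character.
printf '10\n2\n33\n' | sort -n
# → 2, 10, 33
```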

9. uniq: used to find and remove duplicate lines from a file or a stream of data.

As the output above shows, uniq removes the duplicate lines and displays the result.

To count the duplicates, we use ‘-c’.

As we can see, shruti and mohan each show a count of 2 because each appeared twice on adjacent lines (note that uniq only compares adjacent lines, so input is usually sorted first).

Let’s add a few more duplicates to make the concept clearer, and check it out in the output below.

Now let’s try ‘-s’, which skips a number of leading characters when comparing lines for duplicates.


Assume that the lines ‘abcd123’ and ‘john123’ should be considered duplicates. If we ignore the first 4 characters of each line when comparing, we can use the ‘-s’ option with a value of 4 to achieve this.

As we can see, the lines abcd123 and john123 are now considered duplicates because we ignored the first 4 characters of each line.
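The uniq examples above can be sketched like this (sample data assumed):

```shell
# uniq only compares adjacent lines, so input is usually sorted first.
printf 'mohan\nmohan\nshruti\nshruti\nram\n' > names.txt

uniq names.txt      # duplicates removed
uniq -c names.txt   # prefix each line with its repeat count

# -s 4 skips the first 4 characters when comparing:
# "abcd123" and "john123" both compare as "123", so they count as duplicates.
printf 'abcd123\njohn123\n' | uniq -s 4
# → abcd123
```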

10. wc: The wc(word count) command in Linux is a utility that is used to count the number of lines, words, and characters in a file or a stream of data.

The ‘wc’ command counts the lines, words, and characters in the file; here it had 14 lines, 14 words, and 90 characters.

-l‘ is used for counting the number of lines in the file.

-w‘ is used for counting the words in the file.

-c‘ is used for counting characters in the file.

We can also count the lines, words, and characters of a stream of data, for example by piping the output of echo into wc.
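A quick sketch of the variants above (sample data assumed):

```shell
printf 'one two three\nfour five\n' > words.txt

wc words.txt     # lines, words, and bytes, in that order
wc -l words.txt  # lines only
wc -w words.txt  # words only
wc -c words.txt  # bytes/characters only

# wc works on streams too:
echo "hello world" | wc -w
# → 2
```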

11. file: used to determine the type of a file by examining its contents.

This will print out the type of newone2.txt, which may include information such as the file format, the encoding, and any other relevant metadata.
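For example (newone2.txt is created here as sample data):

```shell
echo "just some text" > newone2.txt
file newone2.txt   # reports something like "newone2.txt: ASCII text"
file /bin/ls       # reports an ELF executable on most Linux systems
```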

12. grep: The grep command in Linux is a powerful utility that is used to search for a pattern in a file or a stream of data.

Here we’ve searched for the word ‘how’ in the file; grep prints the complete line containing that pattern.

Here, no output was displayed because the first character was uppercase; we can use the case-insensitive option ‘-i’ to match it anyway.

By using ‘-v’, we display the lines that do not match the pattern.
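The three grep variants above, sketched with assumed sample contents for story.txt:

```shell
printf 'How are you\nhow it works\nall good here\n' > story.txt

grep "how" story.txt     # case-sensitive: matches "how it works" only
grep -i "how" story.txt  # -i also matches "How are you"
grep -v "how" story.txt  # lines that do NOT contain the pattern
```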

13. ps: used to display information about running processes on a system.

‘a’ is used to display the processes of all users that are attached to a terminal, not just your own.

‘u’ is used to display detailed, user-oriented information about each process, including the user who owns the process, the amount of system resources it uses, and the command used to launch it.

‘x’ is used to include processes that are not attached to a terminal, such as daemons.

‘-e’ is used to select every process on the system.

‘-f’ is used to display a full-format listing, with extra columns such as the parent process ID.

Display a list of all running processes:

This will display a list of all running processes on the system, along with detailed information about each process.

Display a list of all processes owned by a specific user:

This will display a list of all processes owned by the specified user, along with detailed information about each process.
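The combinations described above look like this (the program and user names are examples, not from the original output):

```shell
ps aux              # every process, user-oriented format
ps aux | grep sshd  # filter for a specific program (sshd is an example)
ps -u root          # processes owned by a specific user
ps -ef              # full-format listing of every process
```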

14. top: used to monitor the system’s resource usage in real-time. It displays information about the processes that are currently running, the system’s CPU usage, memory usage, and more.

To sort processes by memory usage, press the ‘M‘ key. You will see the processes sorted by memory usage, with the highest memory usage at the top.

You can use the ‘-p’ option followed by a process ID to monitor a specific process.
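top is normally interactive, but batch mode is handy for scripts; the PID below is an example value.

```shell
# -b (batch mode) prints one snapshot instead of the interactive screen;
# -n 1 limits it to a single iteration.
top -b -n 1 | head -n 10

# Interactively, monitor a single process (1234 is an example PID):
# top -p 1234
```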

15. netstat: It is used to display information about network connections and network statistics.

Here we’ll use the options ‘-antp’ to filter the output of netstat. These options indicate the following:

The ‘a‘ option displays all network connections, both listening and non-listening. The ‘n‘ option displays numeric values instead of host and service names. The ‘t‘ option displays TCP connections only. Finally, the ‘p‘ option displays the process ID and name of the program that is using the network connection.

Let’s try filtering the output for the exact data of a specific process; here we’ve started the apache2 server.

To see the output of Apache2 with the process ID, you can use the netstat command with the ‘-antp‘ options. However, if you want to filter the output to only show Apache2 connections, you can pass the output of netstat to the grep command using a pipe.
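Live netstat output depends on the machine’s current connections, so the sketch below pipes a simulated sample line through grep to show the same filtering pattern; the real invocation is shown in the comment.

```shell
# With apache2 running, the real command would be:
#   netstat -antp | grep apache2
# A captured sample line (simulated) stands in for live output here:
netstat_sample='tcp  0  0 0.0.0.0:80  0.0.0.0:*  LISTEN  1234/apache2'
printf '%s\n' "$netstat_sample" | grep apache2
```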

16. tcpdump: a command-line tool used for capturing and analyzing network traffic on Linux and other Unix-like operating systems. It can be used to capture and display packets on a network interface, or to save captured packets to a file for later analysis.

This command captures packets on the specified network interface and displays them in real time. The ‘-i’ option specifies the interface to capture packets on; here we’ve used ‘wlan0’, the wireless network interface used for connecting to a Wi-Fi network on Linux.

To write the captured packets to a file, use the ‘-w’ option followed by an output filename (with an optional path).

To control the output format, ‘-n’ displays IP addresses and port numbers in numeric form instead of resolving names, and the ‘-v’ option displays more verbose output, providing additional details beyond the basic output.
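For reference, the three forms described above; these need root privileges and a live interface, and the interface name wlan0 and the capture filename are assumptions (list your interfaces with tcpdump -D).

```
# Capture and display packets on the wlan0 interface in real time
sudo tcpdump -i wlan0

# Save captured packets to a file for later analysis
sudo tcpdump -i wlan0 -w capture.pcap

# Numeric addresses and ports (-n) with more verbose detail (-v)
sudo tcpdump -i wlan0 -n -v
```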

17. awk: It is a powerful text processing tool used in Unix and Linux operating systems. It is designed for pattern scanning and processing language and is particularly useful for working with structured data files, such as comma-separated value (CSV) files.

The basic syntax of the command follows: awk ‘pattern { action }’ filename


  • pattern is a regular expression pattern that defines the text to be searched for.
  • action is the command to be executed when a match is found.
  • filename is the name of the file to be processed.

The contents of the secret.txt file:
  1. Print the first field of each line of the file:

2. Print the first and second fields of each line of the file:

3. Print the lines that contain the word “Razz”:

4. Print the number of lines of the file.

‘NR’ is a built-in awk variable that stands for ‘Number of Records’, and END is a keyword specifying that the print action should be performed only after all lines in the file have been processed. At that point NR holds the total number of records, so ‘print NR’ prints the total number of lines in the file.
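The four numbered examples above can be sketched like this; the contents of secret.txt are assumed sample data chosen to include the word “Razz” from the text.

```shell
# Sample secret.txt (contents assumed).
printf 'Razz 25 admin\nJohn 30 user\nMia 28 user\n' > secret.txt

awk '{print $1}' secret.txt      # 1. first field of each line
awk '{print $1, $2}' secret.txt  # 2. first and second fields
awk '/Razz/ {print}' secret.txt  # 3. lines containing "Razz"
awk 'END {print NR}' secret.txt  # 4. total number of lines → 3
```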

18. sed: It is a stream editor in Unix and Linux operating systems that can be used to manipulate text files. It allows you to perform various text transformation tasks, such as substitution, deletion, insertion, and more.


  • OPTIONS: These are optional parameters that modify the behavior of the sed command. Some commonly used options include -n (suppress default output), -i (edit files in place), and -e (specify multiple commands).
  • COMMAND: This is the sed command to be executed on the file(s) specified. There are several commands available, including s (substitute), d (delete), a (append), i (insert), and more.
  • FILE: This is the name of the file(s) to be processed by the sed command. If no file is specified, sed will read from standard input (stdin).
  1. Substitute all occurrences of the word ‘how’ with ‘where’ in the file story.txt: ‘s’ indicates a substitution operation, and ‘g’ stands for global, meaning the substitution applies to every occurrence on every line.

2. ‘d’ indicates deletion of the lines matching the pattern ‘where’.

By default sed prints the result without modifying the file; to edit the file in place, use ‘-i’.

3. ‘a’ indicates append: it adds a new line after each line matching the pattern ‘where’.
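The three sed operations above, sketched with assumed sample contents for story.txt:

```shell
printf 'how are you\nwhere is it\nhow it ends\n' > story.txt

sed 's/how/where/g' story.txt        # 1. substitute every "how" with "where"
sed '/where/d' story.txt             # 2. delete lines matching "where"
sed -i 's/how/where/g' story.txt     # edit the file in place, no output
sed '/where/a appended line' story.txt  # 3. append a line after each match
```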

19. xargs: It is used to take standard input and convert it into command-line arguments. It reads items from standard input (stdin), separated by whitespace or a specified delimiter, and passes them as arguments to a specified command.

The basic syntax of xargs is: command | xargs [options] command2

To run a command on each item, we use the option ‘-n 1’, which makes xargs process one word at a time.

To process the words two at a time, we use ‘-n 2’ as the option.
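A sketch of the grouping behavior; the filelist.txt pattern in the comment is a common idiom, not from the original text.

```shell
# xargs turns whitespace-separated input into command arguments.
echo "one two three four" | xargs -n 1  # one word per invocation → 4 lines
echo "one two three four" | xargs -n 2  # two words per invocation → 2 lines

# A common pattern: pass a list of filenames to another command, e.g.
#   cat filelist.txt | xargs rm
```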

20. wget: It is used to download files from the Internet. It can download files using HTTP, HTTPS, and FTP protocols, and can also work with proxies.

The basic syntax of the wget command is: wget [options] URL

Now let’s try downloading a sample file using the wget command.

Once it downloads successfully, the file is saved in the current directory. We can have a look at it using the cat command.

We can save the file with a different name and path, by using the ‘-O‘ option.

To download a file in the background, use ‘-b’; wget will run in the background rather than showing progress.
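For reference, the three forms described above; the URL and output path are placeholders, since the original download target isn’t shown, and these need network access to run.

```
# Download a file into the current directory (URL is a placeholder)
wget https://example.com/sample.txt

# -O saves it under a different name and path
wget -O /tmp/renamed.txt https://example.com/sample.txt

# -b downloads in the background; progress is written to wget-log
wget -b https://example.com/sample.txt
```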

I hope you’ve learned and enjoyed this Walkthrough.

So, You can connect with me on LinkedIn & Twitter for more updates on Infosec.
