Best Web Browsing on AlmaLinux Terminal with Examples
This guide aims to illustrate how to achieve Best Web Browsing on AlmaLinux Terminal using command-line tools like Lynx, w3m, links, and Browsh. These tools are invaluable for browsing websites and downloading files directly from your Linux shell, especially in situations where a graphical interface is unavailable or resource-intensive.
There are several terminal-based web browsing tools available for Linux systems. Lynx, w3m, links, and Browsh stand out as popular options, each offering a unique approach to accessing web content within the terminal environment.
Let’s delve into the process of using these Linux tools for Best Web Browsing on AlmaLinux Terminal, following the steps outlined below with Orcacore’s website as a practical example.
Before proceeding, ensure you have access to your AlmaLinux server as a non-root user with sudo privileges. If you need assistance with this, refer to an initial server setup guide for AlmaLinux.
Number 1 – Lynx Web Browsing Tool
Lynx is a venerable text-based web browser designed for command-line use. It excels in environments where a graphical interface is absent.
To install Lynx on your AlmaLinux system, execute the following command in your terminal:
sudo dnf install lynx -y
Once the installation is complete, you can browse websites. For instance, to view the Orcacore website, use the following command:
lynx www.orcacore.com
The output presents a text-based representation of the website within your terminal.
Navigate using the Up and Down arrow keys. To exit Lynx, press the ‘Q’ key.
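Lynx can also render a page non-interactively, which is convenient for quick checks or scripting. A small sketch using Lynx's standard -dump and -listonly options (adjust the URL and output file to your needs):
# Render the page as plain text and save it for offline reading
lynx -dump www.orcacore.com > orcacore.txt
# Print only the list of links referenced on the page
lynx -dump -listonly www.orcacore.com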
Number 2 – w3m Web Browsing Tool
w3m is another powerful text-based web browser, conceptually similar to Lynx. To install it on your AlmaLinux server, use these commands:
sudo dnf install epel-release -y
sudo dnf install w3m -y
Now, you can browse websites using w3m. For example:
w3m www.orcacore.com
Use the Up and Down arrow keys to navigate, and press Enter to follow links. Press ‘Q’ to exit w3m.
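Like Lynx, w3m has a non-interactive mode. A minimal example using w3m's standard -dump option to print the rendered page as plain text:
w3m -dump www.orcacore.com > orcacore.txt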
Number 3 – Links Web Browsing Tool
Links provides a text-based browser interface for web browsing from the terminal. Install it on your AlmaLinux server using:
sudo dnf install links -y
After installation, you can start browsing. For example:
links www.orcacore.com
Press OK on the initial dialog, then use the Up and Down arrow keys to navigate. To quit, press the 'q' key.
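Links can also dump a rendered page to standard output instead of opening its interactive interface. A brief sketch using its -dump flag:
links -dump www.orcacore.com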
Number 4 – Browsh Browser on AlmaLinux
Browsh is a modern text-based browser that renders pages through a headless Firefox instance, so it can display JavaScript-heavy, modern websites in the terminal. To install it, download the RPM package from the project's GitHub releases page using the following commands:
sudo curl -o browsh.rpm -L https://github.com/browsh-org/browsh/releases/download/v1.8.0/browsh_1.8.0_linux_amd64.rpm
sudo rpm -Uvh ./browsh.rpm
sudo rm ./browsh.rpm
Note: Browsh requires Firefox to be installed. Install Firefox with:
sudo dnf install firefox -y
Then, start Browsh:
browsh
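Browsh opens with its default start page, and you can type a URL into its address bar from there. Recent Browsh releases also document a --startup-url flag for opening a site directly; verify it against browsh --help on your installed version before relying on it:
browsh --startup-url www.orcacore.com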
Conclusion
This guide has demonstrated how to perform Best Web Browsing on AlmaLinux Terminal using Lynx, w3m, links, and Browsh.
Let’s consider alternative approaches to web content retrieval on the AlmaLinux terminal, beyond full-fledged text-based browsers. The goal is still the same: accessing and potentially downloading web content.
Alternative 1: Using curl or wget for Content Retrieval
Instead of browsing interactively, you can use curl or wget to directly retrieve the HTML content of a webpage. This is useful when you need to parse the content programmatically or simply save it for offline viewing.
- curl: A command-line tool to transfer data with URLs. Example:
curl -s www.orcacore.com > orcacore.html
This command retrieves the HTML source code of the Orcacore website and saves it to a file named orcacore.html. The -s option makes curl silent, suppressing the progress meter and error messages. You can then use tools like grep, sed, or awk to extract specific information from the downloaded HTML, as shown in the sketch after this list.
- wget: A command-line utility for retrieving files using HTTP, HTTPS, and FTP. It is particularly useful for downloading multiple files or mirroring entire websites. Example:
wget -qO- www.orcacore.com
This command retrieves the HTML content of Orcacore's website and prints it to standard output. The -q option makes wget quiet, and -O- tells it to write the content to standard output.
To download a file:
wget https://example.com/document.pdf
This downloads the PDF file document.pdf from the specified URL.
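As mentioned in the curl example above, the downloaded HTML can be post-processed with standard text tools. A rough sketch, assuming the page exposes its title in a single <title> element on one line:
# Fetch the page quietly and extract the <title> element
curl -s www.orcacore.com | grep -o "<title>[^<]*</title>"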
Advantages:
- Lightweight and fast.
- Ideal for scripting and automation.
- No interactive interface, making it suitable for background processes.
Disadvantages:
- Requires knowledge of HTML to extract meaningful information.
- Does not render the webpage, so you don’t see it as a user would.
- May not handle JavaScript or complex website layouts correctly.
Alternative 2: Using Python with requests and BeautifulSoup4
This approach combines the power of Python with libraries designed for web scraping and parsing.
- requests: A Python library for making HTTP requests.
- BeautifulSoup4: A Python library for parsing HTML and XML documents.
First, you need to install these libraries:
sudo dnf install python3 python3-pip -y
pip3 install requests beautifulsoup4
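Before writing the script, you can quickly confirm that both libraries import correctly (the printed message here is just illustrative):
python3 -c "import requests, bs4; print('requests and beautifulsoup4 are ready')"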
Then, you can write a Python script to retrieve and parse web content:
import requests
from bs4 import BeautifulSoup

url = "https://www.orcacore.com"
response = requests.get(url)

if response.status_code == 200:
    soup = BeautifulSoup(response.content, 'html.parser')
    # Example: Extract all the links from the page
    for link in soup.find_all('a'):
        print(link.get('href'))
else:
    print(f"Failed to retrieve the page. Status code: {response.status_code}")
Save this script as web_scraper.py and run it using python3 web_scraper.py. The script fetches the HTML content of the Orcacore website, parses it using BeautifulSoup, and then extracts and prints all the links found on the page.
Advantages:
- More robust parsing of HTML compared to simple command-line tools.
- Can handle JavaScript-rendered content with additional libraries like Selenium (more complex setup).
- Python’s versatility allows for complex data manipulation and analysis.
Disadvantages:
- Requires Python and the necessary libraries to be installed.
- More complex setup compared to curl or wget.
- Websites can change their structure, breaking the scraper. Requires maintenance.
In summary, while Lynx, w3m, links, and Browsh offer interactive Best Web Browsing on AlmaLinux Terminal, curl/wget and Python with requests and BeautifulSoup4 provide alternative methods for retrieving and processing web content, each with its own strengths and weaknesses. Choose the approach that best suits your needs and technical expertise.