3 Easy Steps To Make a GET Request With cURL – OrcaCore
In this guide, we want to teach you How To Make a GET Request With cURL. cURL is a command-line utility used to transfer data over the Internet. You can easily use the curl command to request data from a server and receive the information you need. Here on the Orcacore website, you will learn to make a GET request using the curl command. Understanding how to make a GET request with cURL is a valuable skill for any developer or system administrator.
To make a GET request with cURL, you must have access to your server as a root or non-root user with sudo privileges. Then, follow the steps below to see how you can make a GET request.
The curl command-line tool is installed on most Linux distributions by default.
1. Making an HTTP GET Request with cURL
You can request data from a web server by using an HTTP GET request. GET requests are not meant to send data to the server in the message body or to change the server's state, but you can still pass data to the server through URL query parameters.
The basic format of a GET request looks like the following:
curl -X GET <URL>
Replace the URL with your target server to make a GET request to it. For example:
curl -X GET https://google.com
The -X GET option specifies that the HTTP method is GET. When the command runs, curl sends a GET request to the server specified in the URL and waits for the server's response. If the server responds with a successful status code, curl displays the response body in the terminal.
Or you can use the following syntax instead:
curl https://google.com
This command shows the same result as the command above, because GET is curl's default request method when no other method is specified.
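As mentioned earlier, a GET request can still pass data to the server through URL query parameters. Here is a minimal sketch using a hypothetical endpoint and parameter names (quoting the URL keeps the shell from interpreting special characters such as &):
curl "https://example.com/search?q=linux&page=2"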

At this point, you can also request only the HTTP headers instead of the body of the response. To do this, use the -I option in the curl command. For example:
curl -I https://google.com
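Note that https://google.com typically answers with a redirect, so the headers shown above belong to that redirect response. To follow the redirect and see the headers of the final page, you can combine -I with the -L option (also covered in the FAQ below):
curl -I -L https://google.com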

2. Making a JSON Request with cURL
You can ask the server to return your data in JSON format. To do this, add the -H "Accept: application/json" option to the curl command. For example:
curl https://google.com -H "Accept: application/json"
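Keep in mind that the Accept header only tells the server that the client prefers JSON; whether you actually receive JSON depends on the server. As a minimal sketch, you would typically point this at a JSON API (the URL below is a placeholder):
curl -H "Accept: application/json" https://api.example.com/users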
As you can see, it is easy to Make a GET Request With cURL from a server.
For more information about the curl command and how to download files, you can visit this guide on How To Use the curl command in Linux.
Conclusion
At this point, you have learned how to make an HTTP GET request, fetch only the HTTP headers, and request a JSON response with the curl command. Need any help or suggestions? Please leave a comment for us.
Also, you may like these articles:
Using Fasd in Linux for Quick Access to Files & Directories
Update phpMyAdmin to the Latest Version in Linux
FAQs
How do I follow redirects in a Curl GET request?
You can use the -L flag:
curl -L https://example.com
How can I save the GET request response to a file?
You can use the -o flag:
curl -o output.html https://example.com
How do I authenticate a GET request with cURL?
Use the -u option for basic authentication:
curl -u username:password https://example.com
How do I pass query parameters in a GET request?
You can append them to the URL:
curl "https://example.com?param1=value1&param2=value2"
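The FAQ options above can also be combined in a single command. Here is a minimal sketch in which the URL, credentials, and parameter names are placeholders for your own values:
curl -L -u username:password -H "Accept: application/json" -o response.json "https://api.example.com/items?page=1&limit=10"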
Alternative Solutions for Making GET Requests
While cURL is a powerful and widely used tool, there are other methods to perform GET requests, especially when working within a specific programming language or environment. Let's explore two alternative approaches: using Python's requests library and using wget.
1. Using Python's requests Library
The requests library in Python provides a higher-level, more human-friendly interface for making HTTP requests. It simplifies tasks like handling headers, authentication, and response parsing. To use requests, you'll first need to install it:
pip install requests
Then, you can make a GET request like this:
import requests

url = "https://google.com"

try:
    response = requests.get(url)

    # Check if the request was successful
    response.raise_for_status()  # Raises HTTPError for bad responses (4xx or 5xx)

    # Print the response content
    print(response.text)

    # Print the headers
    print(response.headers)
except requests.exceptions.RequestException as e:
    print(f"An error occurred: {e}")
Explanation:
- import requests: Imports the requests library.
- url = "https://google.com": Defines the URL you want to request.
- response = requests.get(url): Sends a GET request to the specified URL and stores the response in the response variable.
- response.raise_for_status(): This is an important step for error handling. It checks the HTTP status code of the response. If the status code indicates an error (e.g., 404 Not Found, 500 Internal Server Error), this method raises an HTTPError exception, which can be caught in the except block.
- print(response.text): Prints the HTML content of the response.
- print(response.headers): Prints the HTTP headers of the response.
- except requests.exceptions.RequestException as e: This block catches any exceptions that occur during the request process, such as network errors, timeout errors, or invalid URLs. This allows for graceful error handling.
Advantages of using requests:
- Simpler syntax: The requests library provides a more intuitive and readable API compared to cURL's command-line syntax.
- Automatic encoding/decoding: requests automatically handles encoding and decoding of request and response data, simplifying working with different character sets.
- Built-in JSON support: requests has built-in methods for encoding and decoding JSON data, making it easier to work with APIs that return JSON responses. You can directly access the JSON data with response.json().
- Integration with Python code: Using requests allows you to seamlessly integrate HTTP requests into your Python scripts and applications.
2. Using wget
wget is another command-line utility, similar to cURL, used for retrieving content from web servers. While primarily designed for downloading files, it can also be used to perform simple GET requests.
To make a GET request using wget, simply use the following command:
wget https://google.com
This command will download the HTML content of https://google.com and save it as a file named index.html in the current directory. If a file with that name already exists, the new file will be saved as index.html.1, index.html.2, and so on.
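If you prefer to choose the file name yourself instead of relying on the default, you can pass the -O option with a file name of your choice (output.html below is just a placeholder):
wget -O output.html https://google.com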
To output the content directly to the console instead of saving it to a file, you can use the -q (quiet) and -O (output document) options:
wget -q -O - https://google.com
Explanation:
- wget: The command to invoke the wget utility.
- -q: Specifies the quiet mode, which suppresses most of the output messages.
- -O -: Specifies that the output should be written to standard output (stdout), which is typically the console.
- https://google.com: The URL you want to retrieve.
Advantages of using wget:
- Simple for basic downloads: wget is straightforward for downloading files directly from the command line.
- Widely available: Like cURL, wget is often pre-installed on many Linux distributions.
- Supports recursive downloads: wget can recursively download files from a website, following links to other pages (see the sketch after this list).
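As a rough sketch of the recursive mode mentioned in the last item above, the following command downloads one level of linked pages from a hypothetical documentation section (the URL is a placeholder and -l 1 limits the recursion depth):
wget -r -l 1 https://example.com/docs/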
Disadvantages compared to cURL and requests:
- Less flexibility: wget offers fewer options for customizing requests compared to cURL and requests. It's less suited for complex API interactions.
- Primarily for downloads: While it can perform GET requests, its main focus is on downloading files.
- No built-in JSON handling: wget doesn't have built-in support for parsing JSON responses.
In conclusion, while cURL is a versatile tool for making GET requests, Python's requests library provides a more programmer-friendly interface within Python environments, and wget offers a simple alternative for basic downloads and GET requests from the command line. The best choice depends on the specific requirements of your task and the environment you're working in.