- How can I run a html file from terminal? [closed]
- 9 Answers
- Linux Command Line: Parsing HTML with w3m and awk
- Download using curl
- Normalize the HTML
- Extract the table we care about
- Format the HTML
- Grab the columns we want
- All together
- How can I preview HTML documents from the command line?
- 5 Answers
- Better:
- How to access a website using command-line from the Terminal
- Introduction
- Netcat
- Wget
- Curl
- W3M
- Lynx
- Browsh
- Custom HTTP Request
- Conclusion
How can I run a html file from terminal? [closed]
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
(The HTML file in question renders a form titled "KIRIM SMS GRATIS" ("Send free SMS") with two fields: "Nomer HP:" (phone number) and "Isi Pesan:" (message body).)

As you can see, when the file is loaded it automatically clicks the submit button and redirects to http://xxxxxx. How can I run this HTML file from the terminal? I'm using this on OpenWrt with a web server installed on it.
Start researching headless browsers. Picking one for you and walking you through installing it is out of scope for this site.
Also, you probably don't want to do this. You should describe your actual problem for us, because writing an entire HTML document with embedded JavaScript for automatically submitting the form, just to programmatically issue POST requests from the command line, is the worst solution to that problem. There's probably a one-line curl command that can do this for you.
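A minimal sketch of such a curl command, assuming the form posts two fields to a hypothetical endpoint. The URL and field names below are placeholders; take the real ones from the form's `action` attribute and `<input name="...">` attributes in the HTML file:

```shell
# Hypothetical endpoint and field names -- read the real ones from the
# <form action="..."> and <input name="..."> attributes in the HTML file.
curl -s -X POST \
  --data-urlencode "nomer_hp=08123456789" \
  --data-urlencode "isi_pesan=test message" \
  "http://example.com/sendsms"
```

--data-urlencode percent-encodes each value, just as the browser would on submit.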
9 Answers
For those like me who have reached this thread because they want to serve an HTML file from the Linux terminal, or view it using a terminal command, use these steps:
Navigate to the directory containing the html file
If you have chrome installed,
google-chrome <filename>.html
Navigate to the directory containing the html file
Then simply type the following in the terminal:
pushd <directory containing the html file>; python3 -m http.server 9999; popd

(python3 -m http.server serves the current working directory, so pushd must be given the directory, not the file itself.)
Then open 0.0.0.0:9999 or localhost:9999 in a browser (whichever address is printed after executing the above command).

Using the second method, anyone else connected to the same network can also view your file, by navigating to http://<your machine's IP address>:9999.
Linux Command Line: Parsing HTML with w3m and awk
I needed to generate some fake data to simulate transactions. I wanted some valid merchant names to make the data look reasonable. After failing to find a nice CSV of merchant names on the internet, I settled on this Top 100 Retailers Chart 2011. Unfortunately, when you copy and paste the table you get a run-together mess.
1 Wal-Mart Bentonville, Ark. $307,736,000 0.6% $421,886,000 72.9% 4,358 1.3% 2 Kroger Cincinnati $78,326,000 6.4% $78,326,000 100.0% 3,609 -0.4% 3 Target Minneapolis $65,815,000 3.8% $65,815,000 100.0% 1,750 0.6% 4 Walgreen Deerfield, Ill. $61,240,000 6.3% $63,038,000 97.1% 7,456 8.1% 5 The Home Depot Atlanta $60,194,000 2.2% $68,000,000 88.5% 1,966 0.0% — See more at: https://nrf.com/resources/top-retailers-list/top-100-retailers-2011#sthash.RUUwpfm0.dpuf
Download using curl
This is the easiest part and most Linux systems will have this installed by default.
curl -s https://nrf.com/resources/top-retailers-list/top-100-retailers-2011
-s tells curl to be silent with its messages. The output goes to standard output.
Normalize the HTML
Before we extract content from the HTML we need it to be normalized. To do this we can use hxnormalize by w3.org in their HTML-XML-utils package.
-x will tell hxnormalize to output XHTML.
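Putting the two steps together, the normalization stage looks like this (the URL is the one from the curl step above):

```shell
# Fetch the page and normalize it into well-formed XHTML
curl -s https://nrf.com/resources/top-retailers-list/top-100-retailers-2011 \
  | hxnormalize -x
```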
Extract the table we care about
Now we need only the content we care about. The HTML-XML-utils package has a tool for this as well: hxselect.
'table.views-table' tells hxselect to extract every table matching the CSS selector, i.e. table elements with the class views-table.
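With the extraction step added, the pipeline so far becomes (-s '\n' appends a newline after each match):

```shell
# Fetch, normalize, then pull out only the table with class "views-table"
curl -s https://nrf.com/resources/top-retailers-list/top-100-retailers-2011 \
  | hxnormalize -x \
  | hxselect -s '\n' 'table.views-table'
```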
Format the HTML
w3m is a command line text based web browser. It can also just dump formatted HTML to standard out which is what I used it for.
| w3m -dump -cols 2000 -T 'text/html'
-dump tells w3m to write its output to standard out as opposed to a scrollable viewer. -cols 2000 ensures we don't have wrapping of the lines, which would make parsing more tedious. -T 'text/html' tells w3m that the input should be treated as HTML.
Grab the columns we want
Finally we need to grab only the first column. awk will help with that.
Let's break down the awk script a little. BEGIN { ... } is used to run something before we start processing data. In this case FIELDWIDTHS="5 29" tells awk that the first 5 columns are field 1, the next 29 columns are field 2, and the remaining columns are field 3. (FIELDWIDTHS is a gawk extension.)
The second part of the awk script is what runs on each line. The two gsub statements trim whitespace from the start and end of the field, respectively. Finally, print $2 prints the 2nd column, which in our case is the company name.
All together
Here is the final full command to run.
curl -s https://nrf.com/resources/top-retailers-list/top-100-retailers-2011 \
  | hxnormalize -x \
  | hxselect -s '\n' 'table.views-table' \
  | w3m -dump -cols 2000 -T 'text/html' \
  | awk 'BEGIN { FIELDWIDTHS="5 29" } { gsub(/^ +/, "", $2); gsub(/ +$/, "", $2); print $2 }'
How can I preview HTML documents from the command line?
I use catdoc to preview Word documents from the command line. Is there something similar for HTML? In other words, I would like to do cathtml Webpage.html | less and get more or less a man-page-style output.
5 Answers
One solution is to use lynx -dump , as in
echo "this is a div" | lynx -dump -stdin
Better:
Use w3m -dump , it respects the page layout. It is awesome.
sudo aptitude install w3m
w3m -dump file.html
echo "x" | w3m -dump -T text/html
You can set the number of columns of the terminal to get a better layout; for example, if you are using a full-window terminal, 200 can be a good try:
w3m -dump -cols 200 file.html
You can use html2text to look into an HTML document from command line.
sudo apt-get install html2text
html2text Webpage.html | less
It is not exactly a man-page look, though; that depends on the HTML input, of course.
You could use w3m, which should already be installed; if it isn't, install it with sudo apt-get install w3m.
An example command syntax:
w3m -dump askubuntu.com | less
It can be used with both URLs as well as local HTML files.
There are these so-called text HTML browsers, of which I have successfully tested Lynx and ELinks. ELinks has become my favorite.

Anyway, they are different from the "web page previewers" you suggest in your question. They are more like active real-time browsers, which carry lots of useful features for easy web browsing, and of course you can also use them to preview local or remote web pages.

Information about both Lynx and ELinks is available on Wikipedia.
How to access a website using command-line from the Terminal
Penetration testing often involves dealing with a variety of tools. These tools may be command-line based or GUI based, readily available, and sometimes the pen tester may have to automate a set of commands.
Introduction
A GUI is often not available, and you are not expected to rely on GUI/GNOME-based tools for testing and exploitation. Suppose you have gained a shell on some machine and want to download an exploit; a web browser is then not available. In this tutorial, we will take a look at different tools that are helpful for browsing a website from the command line.
Netcat
Netcat is a Swiss army knife for hackers, and it gives you a range of options to make your way through the exploitation phase.
Following is how to access a webpage using the GET method with netcat.
$ nc www.google.com 80
GET / HTTP/1.1
Host:www.google.com
To access specific content, such as a video, you can enter the following:
$ nc www.example.com 80
GET /VIDEO HTTP/1.1
Host:www.example.com
You can change the method to POST, OPTIONS, or CONNECT as per your requirements. With HTTP/1.1, the connection doesn't close after one request. To make the server close the connection after the response, add a Connection: close header to the request:
$ nc www.google.com 80
GET / HTTP/1.1
Host:www.google.com
Connection: close
The above request will close the connection automatically after the page has been fetched from the web server.
Wget
wget is another commonly used tool to access web pages. You can use it to download anything hosted on a particular web server.
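A basic invocation looks like this (example.com is a placeholder host):

```shell
# Save the page under its remote name
wget http://www.example.com/index.html

# Or choose the output name yourself with -O
wget -O page.html http://www.example.com/index.html
```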
Curl
Curl is another powerful tool for fetching web pages in a command-line environment.
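A basic sketch (example.com is a placeholder host):

```shell
# Print the page to stdout; -s silences the progress meter, -L follows redirects
curl -sL http://www.example.com/

# Save the body to a file instead
curl -sL -o page.html http://www.example.com/
```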
W3M
w3m is a CLI-based web browser. It lets you view the page source and access the webpage as if you were accessing it in any GUI browser.
It is available in most distributions' package repositories.
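On a Debian/Ubuntu-style system, installing and using it would look like this (package name w3m):

```shell
# Install (Debian/Ubuntu)
sudo apt install w3m

# Browse interactively
w3m www.google.com

# Or dump the rendered page to stdout
w3m -dump www.google.com
```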
Lynx
Another useful command-line tool is lynx. You can install it by entering;
$ sudo apt install lynx
You can then open pages directly from the terminal.
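Once installed, pass lynx a URL:

```shell
# Browse interactively
lynx www.google.com

# Or render the page to stdout
lynx -dump www.google.com
```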
Browsh
Another handy text-based browser is browsh. It is still under development. You can try it without installing anything by connecting to its public SSH demo.
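Browsh's maintainers run a public demo instance reachable over SSH at brow.sh, so a session can be started with:

```shell
# Connect to the public browsh demo (no local install needed)
ssh brow.sh
```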
Then you can press CTRL+L to focus on the URL bar. Enter your query, and browsh will use Google as a search engine and output the results on the command line.
Here's the output of a weather query using browsh.
Custom HTTP Request
You can also craft your custom HTTP request by entering the following command;
printf "GET /\r\nHost: google.com\r\n\r\n" | netcat google.com 80
The HTTPS request will look like the following:
printf "GET /\r\nHost: google.com\r\n\r\n" | socat - OPENSSL:google.com 443
Conclusion
We have various tools available for accessing web pages from the terminal. The terminal also gives us the ability to customize requests, giving us enhanced capabilities. During exploitation, a pen tester should have some of these tools in their pocket.