You are viewing a read-only archive of the Blogs.Harvard network.

wget recon technique

I was looking for a novel way to recon a network for web servers and came up with a command-line combination involving wget and find. The first stage uses wget to download the index page of any server that responds. The second stage removes the zero-length files that get written for IP addresses that are active but have no web server responding.

WGET STAGE
If you are assigned to scout a network range from 192.168.1.1 – 192.168.1.255, you can use a for loop and wget to quickly download index pages. This technique could obviously be adapted for larger ranges, but as published here it is best suited to a single Class C.

for i in `seq 1 255`
do
wget -O 192.168.1.$i.html 192.168.1.$i &
done

Expanding the parameters of the wget command, -O is used to write the output to a file with a specific name. Otherwise we would have filename collisions all over the place and, more importantly, no idea which server each page came from. The & puts each process into the background and acts as a cheap form of parallel tasking: all of the requests launch at roughly the same time. Since we are limiting ourselves to a Class C, we won’t worry about overloading the machine.
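A slightly hardened variant of the same sweep (a sketch, not part of the original recipe) adds a per-request timeout and a single retry so dead addresses don’t hang a job, quiets wget’s progress output, and finishes with wait so the script does not exit while background requests are still running:

```shell
#!/bin/sh
# Hypothetical hardened sweep: -q quiets output, -T sets a timeout in
# seconds, -t 1 limits wget to a single attempt per address.
for i in `seq 1 255`
do
  wget -q -T 5 -t 1 -O 192.168.1.$i.html 192.168.1.$i &
done
wait   # block until every background wget has returned
```

The flag values here are assumptions; tune the timeout to the latency of the network you are scouting.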

ZERO LENGTH FILE STAGE
The resulting files will either contain HTML or have zero length. The zero-length files occur when the IP address is alive but there is no web server there to respond. To clean these up, we use the find command to discover the empty files.

find . -type f -empty -delete

What is left is HTML saved under a filename matching the IP address where it was found.
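As a quick follow-up (a hypothetical addition, not part of the original recipe), you can turn the surviving pages into a one-line-per-host inventory by pulling each page’s title:

```shell
#!/bin/sh
# Hypothetical inventory pass: print each saved page's filename
# (i.e. the IP address) next to its HTML <title>, if any.
for f in *.html
do
  # grab "<title>..." up to the closing tag, then strip the tag itself
  title=$(grep -o -i '<title>[^<]*' "$f" | sed 's/<[Tt][Ii][Tt][Ll][Ee]>//')
  echo "$f: $title"
done
```

This gives a rough first impression of what each responder is (router login page, default Apache page, and so on) without opening every file by hand.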
