Hi, I've been looking at a website that uses Ajax requests to update multiple timers a couple of times a second, resulting in probably 10 to 15 Ajax requests per second. I was trying to write a program in Python that accesses the same URL the Ajax is requesting and saves the incoming data. I am using httplib for the actual requests, maintaining persistent connections and getting data about twice a second. I am also using multithreading to run multiple requests at once (a rough sketch of that is below the code). Each one looks like this:
Code:
import httplib, time
web_access = httplib.HTTPConnection("www.website.com")
while True:
    web_access.request("GET", "/my_url")  # request() takes the HTTP method first, then the path
    response = web_access.getresponse().read()
    time.sleep(.5)
    ## do stuff with response
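For completeness, each thread runs its own copy of that loop with its own connection (httplib connections aren't safe to share between threads). A rough sketch of how I start them; the paths here are just made-up placeholders:

import httplib, threading, time

def poll(path):
    # each thread keeps its own persistent connection
    conn = httplib.HTTPConnection("www.website.com")
    while True:
        conn.request("GET", path)
        response = conn.getresponse().read()
        time.sleep(.5)
        ## do stuff with response

# one polling thread per timer URL (placeholder paths)
for path in ["/timer_1", "/timer_2", "/timer_3"]:
    threading.Thread(target=poll, args=(path,)).start()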
The program works fine for about 20-30 minutes, then I get a timeout error and the server I am connecting to locks me out (even Firefox can't reach it, though I can still browse all other websites). I used to use urllib to access the data, but it would time out much faster (I thought that was because it created a new TCP connection for every request, so I switched to persistent connections with httplib). However, if I open the website in Firefox, it will run for hours, continually updating the timers without blocking me. I was wondering what I am doing wrong, and how Ajax.request differs from httplib.HTTPConnection.request()?
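One difference I can think of: Firefox's Ajax calls send a full set of browser headers (User-Agent, Cookie, Referer, and usually X-Requested-With) that a bare httplib request doesn't, so maybe the server blocks clients that don't look like a browser. In case that's the issue, this is how I would attach such headers via request()'s headers argument; the values below are only examples that I'd replace with whatever Firefox actually sends (copied from its network monitor):

import httplib

conn = httplib.HTTPConnection("www.website.com")
# example header values only -- I'd copy the real ones Firefox
# sends, including the site's actual cookie
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; rv:10.0) Gecko/20100101 Firefox/10.0",
    "X-Requested-With": "XMLHttpRequest",
    "Referer": "http://www.website.com/",
}
conn.request("GET", "/my_url", headers=headers)
response = conn.getresponse().read()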