I am fairly new to the world of async. I was reading about asyncio before coming across curio and deciding to use it.
My goal is to create a small app that scrapes specific URLs every n seconds and, of course, writes the results into a db. Previously I used threads with timeouts, before someone in #python on freenode suggested using asyncio.
Curio looks very promising and neat, but what is the best way to scrape? Using sockets, like the docs show? Or can I use something like aiohttp or requests?
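To make the question concrete, this is roughly the shape of the loop I have in mind, sketched here with the stdlib asyncio (since that is what I read about first). The `fetch` parameter is a placeholder for whatever scraping call ends up being the right answer with curio:

```python
import asyncio

async def scrape_every(fetch, urls, interval, rounds=None):
    """Call the `fetch` coroutine for each URL, then sleep `interval`
    seconds, and repeat. `rounds=None` means run forever; passing a
    number is handy for testing. Returns the collected results."""
    results = []
    done = 0
    while rounds is None or done < rounds:
        for url in urls:
            # In the real app this would scrape the page and write to the db.
            results.append(await fetch(url))
        done += 1
        if rounds is None or done < rounds:
            await asyncio.sleep(interval)
    return results
```

In curio I assume the structure would be the same, just with `curio.run` and `curio.sleep` instead, so the real question is only what to plug in as `fetch`.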