If you want Google to crawl your site faster, remove user-specific details from your URLs. URL parameters that do not change the content of a page should be dropped from the URL and stored in a cookie instead. This reduces the number of URLs that point to the same content, and that in turn speeds up crawling.
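As a rough sketch of the idea, here is how you might strip non-content parameters from a URL before emitting it in links. The parameter names (`sessionid`, `ref`) are hypothetical examples of values that belong in a cookie rather than the URL:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Hypothetical parameters that do not change what the page shows;
# they should live in a cookie, not in the URL.
NON_CONTENT_PARAMS = {"sessionid", "ref"}

def clean_url(url):
    """Return the URL with non-content query parameters removed."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k not in NON_CONTENT_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

print(clean_url("https://example.com/shoes?color=red&sessionid=abc123"))
# https://example.com/shoes?color=red
```

Both the cleaned and the original URL serve the same page, but only the cleaned one is worth letting Google crawl.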
Infinite spaces waste time and bandwidth for everyone, so consider taking action when you have pages such as calendars that link to an endless number of past or future dates, each with its own URL.
Advantages you get when your Google crawling rate increases:
1) You can write about the latest technology before other bloggers spot it. With a faster crawling rate your post enters Google's index sooner, and your blog or website may earn a good SERP position for that article.
2) Your niche site or blog gets indexed faster, without delay; the results are almost instant.
3) You can test the key phrases or keywords of an article you wrote only minutes earlier and see where you rank in the SERPs for each of them.
4) You can review anything and start getting traffic right away.
Just tell Google to ignore pages it cannot crawl. These include log-in pages, shopping carts, contact forms, and other pages that require users to perform actions crawlers cannot perform themselves. You can do this with the robots.txt file.
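As a minimal illustration (the paths below are hypothetical; use the ones that match your own site), a robots.txt file blocking those pages might look like this:

```text
User-agent: *
Disallow: /login/
Disallow: /cart/
Disallow: /contact-form/
```

Place the file at the root of your domain, e.g. https://example.com/robots.txt, so crawlers can find it.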
Avoid duplicate content when possible. Google prefers to have one URL for each piece of content. Google recognizes that this is not always possible because of content management systems and the like, which is why the canonical link element exists: it lets you specify the preferred URL for a particular piece of content.
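For example, if the same page is reachable at several addresses (the URL below is hypothetical), you can point them all at one preferred version by adding this tag inside each page's &lt;head&gt;:

```html
<link rel="canonical" href="https://example.com/shoes" />
```

Google then treats the canonical URL as the main copy and consolidates the duplicates under it.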
In short, a faster Google crawling rate gets your content indexed sooner and brings you all of the benefits listed above.