{
"success": true,
"data": {
"query": "web scraping best practices",
"results": [
{
"position": 1,
"title": "Web Scraping Best Practices Guide",
"url": "https://example.com/best-practices",
"description": "Learn the essential best practices for ethical and effective web scraping...",
"displayLink": "example.com",
"markdown": "# Web Scraping Best Practices\n\n## Introduction\nWeb scraping is a powerful technique for extracting data from websites. However, it must be done responsibly to respect website resources and legal boundaries.\n\n## Key Principles\n\n1. **Respect robots.txt** - Always check and follow robots.txt directives\n2. **Use rate limiting** - Avoid overwhelming servers with too many requests\n3. **Identify yourself** - Use descriptive user agents\n4. **Handle errors gracefully** - Implement retry logic with exponential backoff\n5. **Cache responses** - Minimize duplicate requests\n\n## Technical Implementation\n\nWhen implementing web scrapers, consider...",
"content": "# Web Scraping Best Practices\n\n## Introduction\nWeb scraping is a powerful technique for extracting data from websites. However, it must be done responsibly to respect website resources and legal boundaries.\n\n## Key Principles\n\n1. **Respect robots.txt** - Always check and follow robots.txt directives\n2. **Use rate limiting** - Avoid overwhelming servers with too many requests\n3. **Identify yourself** - Use descriptive user agents\n4. **Handle errors gracefully** - Implement retry logic with exponential backoff\n5. **Cache responses** - Minimize duplicate requests\n\n## Technical Implementation\n\nWhen implementing web scrapers, consider...",
"preview": "Web scraping is a powerful technique for extracting data from websites. However, it must be done responsibly to respect website resources and legal boundaries. Key principles include respecting robots.txt directives, using rate...",
"summary": "Web scraping is a powerful technique for extracting data from websites. However, it must be done responsibly to respect website resources and legal boundaries. Always check and follow robots.txt directives, use rate limiting to avoid overwhelming servers, and identify yourself with descriptive user agents.",
"scrapedAt": "2025-01-15T10:30:00Z",
"scrapeStatus": "success",
"wordCount": 1542,
"readingTime": 7,
"metadata": {
"statusCode": 200,
"contentQuality": "high"
}
},
{
"position": 2,
"title": "Ethical Web Scraping Guidelines",
"url": "https://another.com/ethics",
"description": "Guidelines for responsible web scraping practices...",
"displayLink": "another.com",
"markdown": "# Ethical Web Scraping\n\nWhen scraping websites, it's crucial to follow ethical guidelines that protect both the scraper and the website owner. This includes respecting terms of service, avoiding personal data collection without consent, and being transparent about your scraping activities.",
"content": "# Ethical Web Scraping\n\nWhen scraping websites, it's crucial to follow ethical guidelines that protect both the scraper and the website owner. This includes respecting terms of service, avoiding personal data collection without consent, and being transparent about your scraping activities.",
"preview": "When scraping websites, it's crucial to follow ethical guidelines that protect both the scraper and the website owner. This includes respecting terms of service, avoiding personal data collection without consent, and being...",
"summary": "When scraping websites, it's crucial to follow ethical guidelines that protect both the scraper and the website owner. This includes respecting terms of service, avoiding personal data collection without consent, and being transparent about your scraping activities.",
"scrapedAt": "2025-01-15T10:30:05Z",
"scrapeStatus": "success",
"wordCount": 987,
"readingTime": 5,
"metadata": {
"statusCode": 200,
"contentQuality": "high"
}
}
],
"totalResults": 12400,
"searchTime": 8.5,
"creditsUsed": 6
}
}
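
For consumers of this payload, a minimal TypeScript sketch of the response shape is shown below. The interface names (`SearchResponse`, `SearchResult`) and the `successfulResults` helper are illustrative assumptions inferred from the example above, not part of any documented client library; the string-literal union for `scrapeStatus` likewise assumes `"success"` is one of several possible values.

```typescript
// Sketch of the response shape above. Type and field names follow the
// example JSON; the union type for scrapeStatus is an assumption.
interface SearchResult {
  position: number;
  title: string;
  url: string;
  description: string;
  displayLink: string;
  markdown: string;
  content: string;
  preview: string;
  summary: string;
  scrapedAt: string;   // ISO 8601 timestamp, e.g. "2025-01-15T10:30:00Z"
  scrapeStatus: "success" | string;
  wordCount: number;
  readingTime: number; // minutes, judging by the wordCount ratio
  metadata: {
    statusCode: number;
    contentQuality: string;
  };
}

interface SearchResponse {
  success: boolean;
  data: {
    query: string;
    results: SearchResult[];
    totalResults: number;
    searchTime: number;
    creditsUsed: number;
  };
}

// Keep only results whose page scrape actually succeeded and returned
// HTTP 200, since downstream code will usually read `content` or `markdown`.
function successfulResults(response: SearchResponse): SearchResult[] {
  if (!response.success) {
    throw new Error("search request failed");
  }
  return response.data.results.filter(
    (r) => r.scrapeStatus === "success" && r.metadata.statusCode === 200
  );
}
```

A caller would typically parse the raw body with `JSON.parse`, assert it as `SearchResponse` (or validate it with a schema library), and then iterate over `successfulResults(...)` to reach the scraped `markdown` of each hit.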