# This is the robots.txt file for influentialpoints.com
# InfluentialPoints does not permit the use of our content for large language models.

Sitemap: https://influentialpoints.com/google_sitemap.xml

# Block all known AI crawlers and assistants
# from using content for training AI models.
# Source: https://robotstxt.com/ai
User-agent: GPTBot
User-agent: ClaudeBot
User-agent: Claude-User
User-agent: Claude-SearchBot
User-agent: CCBot
User-agent: Google-Extended
User-agent: Applebot-Extended
User-agent: Facebookbot
User-agent: Meta-ExternalAgent
User-agent: Meta-ExternalFetcher
User-agent: diffbot
User-agent: PerplexityBot
User-agent: Perplexity-User
User-agent: IbouBot
User-agent: Omgili
User-agent: Omgilibot
User-agent: webzio-extended
User-agent: ImagesiftBot
User-agent: Bytespider
User-agent: TikTokSpider
User-agent: Amazonbot
User-agent: amazon-kendra
User-agent: Youbot
User-agent: SemrushBot-OCOB
User-agent: Petalbot
User-agent: VelenPublicWebCrawler
User-agent: TurnitinBot
User-agent: Timpibot
User-agent: OAI-SearchBot
User-agent: ICC-Crawler
User-agent: AI2Bot
User-agent: AI2Bot-Dolma
User-agent: DataForSeoBot
User-agent: AwarioBot
User-agent: AwarioSmartBot
User-agent: AwarioRssBot
User-agent: Google-CloudVertexBot
User-agent: PanguBot
User-agent: Kangaroo Bot
User-agent: Sentibot
User-agent: img2dataset
User-agent: Meltwater
User-agent: Seekr
User-agent: peer39_crawler
User-agent: cohere-ai
User-agent: cohere-training-data-crawler
User-agent: DuckAssistBot
User-agent: Scrapy
User-agent: Cotoyogi
User-agent: aiHitBot
User-agent: Factset_spyderbot
User-agent: FirecrawlAgent
Disallow: /
DisallowAITraining: /

# Block any non-specified AI crawlers (e.g., new
# or unknown bots) from using content for training
# AI models. The DisallowAITraining and Content-Usage
# directives are still experimental and may not be
# supported by all AI crawlers.
User-agent: *
DisallowAITraining: /
Content-Usage: ai=n
Disallow: /course/i/
Disallow: /course/x/
Disallow: /Downloads/
Disallow: /downloads/
Disallow: /Guests/
Disallow: /dbb/
Allow: /

User-agent: ImagesiftBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

User-agent: anthropic-ai
Disallow: /

User-agent: cohere-ai
Disallow: /

User-agent: Googlebot
Disallow: /course/

User-agent: AdsBot-Google
Disallow: /course/

User-agent: Googlebot-Image
Disallow: /course/

User-agent: MegaIndex.ru
Disallow: /

User-agent: MegaIndex
Disallow: /

User-agent: BLEXBot
Disallow: /

User-agent: AhrefsBot
Disallow: /

User-agent: Barkrowler
Disallow: /

User-agent: SemrushBot
Disallow: /

User-agent: Orthogaffe
Disallow: /

User-agent: The Knowledge AI
Disallow: /

# Crawlers that are kind enough to obey, but which we'd rather not have
# unless they're feeding search engines.
User-agent: UbiCrawler
Disallow: /

User-agent: DOC
Disallow: /

User-agent: Zao
Disallow: /

User-agent: Twiceler
Disallow: /

# Some bots are known to be trouble, particularly those designed to copy
# entire sites or download them for offline viewing. Please obey robots.txt.
#
User-agent: sitecheck.internetseer.com
Disallow: /

User-agent: Zealbot
Disallow: /

User-agent: MSIECrawler
Disallow: /

User-agent: SiteSnagger
Disallow: /

User-agent: WebStripper
Disallow: /

User-agent: WebCopier
Disallow: /

User-agent: Fetch
Disallow: /

User-agent: Offline Explorer
Disallow: /

User-agent: Teleport
Disallow: /

User-agent: TeleportPro
Disallow: /

User-agent: WebZIP
Disallow: /

User-agent: linko
Disallow: /

User-agent: HTTrack
Disallow: /

User-agent: Microsoft.URL.Control
Disallow: /

User-agent: Xenu
Disallow: /

User-agent: larbin
Disallow: /

User-agent: libwww
Disallow: /

User-agent: ZyBORG
Disallow: /

User-agent: Download Ninja
Disallow: /

User-agent: Nutch
Disallow: /

User-agent: spock
Disallow: /

User-agent: OmniExplorer_Bot
Disallow: /

User-agent: TurnitinBot
Disallow: /

User-agent: BecomeBot
Disallow: /

User-agent: genieBot
Disallow: /

User-agent: dotbot
Disallow: /

User-agent: MLBot
Disallow: /

User-agent: 80bot
Disallow: /

User-agent: Linguee Bot
Disallow: /

User-agent: aiHitBot
Disallow: /

User-agent: Exabot
Disallow: /

User-agent: SBIder/Nutch
Disallow: /

User-agent: Jyxobot
Disallow: /

User-agent: mAgent
Disallow: /

User-agent: MJ12bot
Disallow: /

User-agent: Speedy Spider
Disallow: /

User-agent: ShopWiki
Disallow: /

User-agent: Huasai
Disallow: /

User-agent: DataCha0s
Disallow: /

User-agent: Baiduspider
Disallow: /

User-agent: Atomic_Email_Hunter
Disallow: /

User-agent: Mp3Bot
Disallow: /

User-agent: WinHttp
Disallow: /

User-agent: betaBot
Disallow: /

User-agent: core-project
Disallow: /

User-agent: panscient.com
Disallow: /

User-agent: Java
Disallow: /

User-agent: libwww-perl
Disallow: /

# Sorry, wget in its recursive mode is a frequent problem.
#
User-agent: wget
Disallow: /

#
# A capture bot, downloads gazillions of pages with no public benefit
# http://www.webreaper.net/
User-agent: WebReaper
Disallow: /

# Poorly behaved crawlers, some of these ignore
# robots.txt but...
#
# The 'grub' distributed client has been *very* poorly behaved.
User-agent: grub-client
Disallow: /

User-agent: k2spider
Disallow: /

# Hits many times per second, not acceptable
# http://www.nameprotect.com/botinfo.html
User-agent: NPBot
Disallow: /