
🎉 New: crawl_headers function for crawling a known list of URLs with the HEAD method only
🎊 New: the SEO crawler has new options for following links and for including/excluding URLs by URL parameters and/or regex
🎉 New: reverse_dns_lookup function for getting host information for a list of IP addresses
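
A minimal sketch of the two new functions (assuming their current signatures; the URL list, IP list, and output file name are placeholders):

import advertools as adv
import pandas as pd

# Crawl a known list of URLs with HEAD requests only (no page bodies are downloaded).
adv.crawl_headers(
    ['https://example.com/', 'https://example.com/page-1'],
    output_file='headers_output.jl',  # the crawler writes jsonlines
)
headers_df = pd.read_json('headers_output.jl', lines=True)

# Get host information for a list of IP addresses (e.g. from your server logs).
hosts_df = adv.reverse_dns_lookup(['66.249.66.1', '66.249.66.2'])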

advertools: productivity & analysis tools to scale your online marketing

A digital marketer is a data scientist.
Your job is to manage, manipulate, visualize, communicate, understand, and make decisions based on data.

You might be doing basic stuff, like copying and pasting text in spreadsheets, you might be running large-scale automated platforms with sophisticated algorithms, or somewhere in between. In any case, your job is all about working with data.

As a data scientist you don't spend most of your time producing cool visualizations or finding great insights. The majority of your time is spent wrangling with URLs, figuring out how to stitch together two tables, hoping that the dates won't break without you knowing, or trying to generate the next 124,538 keywords for an upcoming campaign by the end of the week!

advertools is a Python package that can hopefully make that part of your job a little easier.

Installation

pip install advertools
# OR:
pip3 install advertools

SEM Campaigns

The most important thing to achieve in SEM is a proper mapping between the three main elements of a search campaign:

Keywords (the intention) -> Ads (your promise) -> Landing Pages (your delivery of the promise)

Once you have this mapping done, you can focus on management and analysis. More importantly, once you know that you can set this up in an easy way, you know you can focus on more strategic issues. In practical terms, you need two main tables to get started: one for keywords and one for ads; a minimal sketch of generating both follows.
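
Here is a quick illustration of how those two tables might be generated with kw_generate and ad_create (the product and word lists are made up for the example, and the exact keyword arguments are assumptions based on the current API):

import advertools as adv

# Keywords table: combine products with descriptive words into keyword permutations,
# each assigned to an ad group and match type.
products = ['running shoes', 'trail shoes']
words = ['buy', 'best', 'price']
keywords_df = adv.kw_generate(products, words, match_types=['Exact', 'Phrase'])

# Ads table: fill a text template with each product name, using a fallback
# whenever the result would exceed the length limit.
headlines = adv.ad_create(template='Get the Best {}',
                          replacements=products,
                          fallback='Get the Best Shoes',
                          max_len=30)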

SEO

Probably the most comprehensive online marketing area that is both technical (crawling, indexing, rendering, redirects, etc.) and non-technical (content creation, link building, outreach, etc.). Here are some tools that can help with your SEO:

  • SEO crawler: A generic SEO crawler that can be customized, built with Scrapy, and with several features (a minimal crawl sketch follows this list):
    • Standard SEO elements extracted by default (title, header tags, body text, status code, response and request headers, etc.)
    • CSS and XPath selectors: You probably have more specific needs in mind, so you can easily pass any selectors to be extracted in addition to the standard elements
    • Custom settings: Full access to Scrapy's settings, allowing you to better control the crawling behavior (set custom headers or a user agent, stop the spider after a certain number of pages, seconds, or megabytes, save crawl logs, run jobs at intervals so you can stop and resume your crawls, which is ideal for large crawls or for continuous monitoring, and many more options)
    • Following links: Option to only crawl a set of specified pages, or to follow and discover all pages through links
  • robots.txt downloader: A simple downloader of robots.txt files into a DataFrame format, so you can keep track of changes across crawls, if any, and check the rules, sitemaps, etc.
  • XML Sitemaps downloader / parser: An essential part of any SEO analysis is to check XML sitemaps. This is a simple function with which you can download one or more sitemaps (by providing the URL of a robots.txt file, a sitemap file, or a sitemap index).
  • SERP importer and parser for Google & YouTube: Connect to Google's API and get the search data you want. Multiple search parameters are supported, all in one function call, and all results are returned in a DataFrame.
  • Tutorials and additional resources
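
A minimal sketch of a small crawl, together with the robots.txt, sitemap, and SERP helpers (the URLs, selectors, and credentials are placeholders; serp_goog needs your own Custom Search API key and search engine ID):

import advertools as adv
import pandas as pd

# Crawl a site, follow its links, and extract a custom element with a CSS selector
# in addition to the standard SEO elements.
adv.crawl(
    'https://example.com',
    output_file='example_crawl.jl',                    # the crawler writes jsonlines
    follow_links=True,
    css_selectors={'price': '.product-price::text'},   # extracted alongside the defaults
    custom_settings={'CLOSESPIDER_PAGECOUNT': 100},    # stop the spider after 100 pages
)
crawl_df = pd.read_json('example_crawl.jl', lines=True)

# robots.txt rules and XML sitemap URLs as DataFrames:
robots_df = adv.robotstxt_to_df('https://example.com/robots.txt')
sitemap_df = adv.sitemap_to_df('https://example.com/sitemap.xml')

# Google SERPs; list parameters are expanded into one request per combination:
serp_df = adv.serp_goog(q=['seo crawler', 'python seo'], key='YOUR_KEY', cx='YOUR_CX')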

Text & Content Analysis (for SEO & Social Media)

URLs, page titles, tweets, video descriptions, comments, and hashtags are some examples of the types of text we deal with. advertools provides a few options for text analysis (a minimal sketch follows the list below):

  • Word frequency: Counting words in a text list is one of the most basic and important tasks in text mining. What is also important is counting those words while taking into consideration their relative weights in the dataset. word_frequency does just that.
  • URL analysis: We all have to handle many thousands of URLs in reports, crawls, social media extracts, XML sitemaps, and so on. url_to_df converts your URLs into easily readable DataFrames.
  • Emoji: Produced with one click, extremely expressive, highly diverse (3k+ emoji), and very popular, it's important to capture what people are trying to communicate with emoji. You can extract emoji and get their names, groups, and sub-groups. The full emoji database is also available for convenience, as well as an emoji_search function in case you want some ideas for your next social media post or any other kind of communication.
  • extract_ functions: The text that we deal with contains many elements and entities that have their own special meaning and usage. There is a group of convenience functions to help in extracting and getting basic statistics about structured entities in text: emoji, hashtags, mentions, currencies, numbers, URLs, questions, and more. You can also provide a custom regex for your own needs.
  • Stopwords: A list of stopwords in forty different languages to help in text analysis.
  • Tutorial on DataCamp for creating the word_frequency function and explaining the importance of the difference between absolute and weighted word frequency
  • Text Analysis for Online Marketers An introductory article on SEMrush
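
A minimal sketch of the text analysis helpers described above (the post list and weights are made up for the example):

import advertools as adv

posts = ['Loving the new shoes! 😍 #running',
         'Best price on trail shoes? #running #trail',
         'New blog post: how to choose running shoes 🏃']

# Weighted word frequency: weight each text by a number such as impressions or pageviews.
word_freq = adv.word_frequency(posts, num_list=[120, 300, 80])

# Split a list of URLs into their components (scheme, netloc, path, query, etc.).
urls_df = adv.url_to_df(['https://example.com/blog/post-1?utm_source=tw',
                         'https://example.com/shop/shoes'])

# Structured entities in text, with summary statistics:
emoji_summary = adv.extract_emoji(posts)
hashtag_summary = adv.extract_hashtags(posts)

# Ideas for emoji, and stopwords for text cleaning:
hearts = adv.emoji_search('heart')
english_stopwords = adv.stopwords['english']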

Social Media

In addition to the text analysis techniques provided, you can also connect to the Twitter and YouTube data APIs. The main benefits of using advertools for this are listed below, followed by a minimal usage sketch:

  • Handles pagination and request limits: Typically every API limits the number of results it returns per request. You have to handle pagination when you need more than that limit, which you typically do. This is handled by default.
  • DataFrame results: APIs send back data in formats that need to be parsed and cleaned before you can start your analysis. This is also handled automatically.
  • Multiple requests: In YouTube's case you might want to request data for the same query across several countries, languages, channels, etc. You can specify them all in one call and get the product of all the requests in one response.
  • Tutorials and additional resources
  • A visual tool to check what is trending on Twitter for all available locations
  • A Twitter data analysis dashboard with many options
  • How to use the Twitter data API with Python
  • Extracting entities from social media posts tutorial on Kaggle
  • Analyzing 131k tweets by European Football clubs tutorial on Kaggle
  • An overview of the YouTube data API with Python
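
A minimal sketch of the Twitter and YouTube modules (all credentials are placeholders obtained from the respective developer consoles, and the list-expansion behavior for YouTube parameters is an assumption based on the description above):

import advertools as adv

# Twitter: authenticate once, then each function returns a DataFrame
# and handles pagination for you.
adv.twitter.set_auth_params(app_key='YOUR_APP_KEY',
                            app_secret='YOUR_APP_SECRET',
                            oauth_token='YOUR_OAUTH_TOKEN',
                            oauth_token_secret='YOUR_OAUTH_TOKEN_SECRET')
tweets_df = adv.twitter.search(q='#seo', count=500, tweet_mode='extended')

# YouTube: one call can cover several values of a parameter, and the results
# come back combined in a single DataFrame.
videos_df = adv.youtube.search(key='YOUR_YOUTUBE_KEY',
                               q='python seo',
                               part='snippet',
                               regionCode=['US', 'GB'])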

Conventions

Function names mostly start with the object you are working on, so you can use autocomplete to discover other options:

kw_: for keywords-related functions
ad_: for ad-related functions
url_: URL tracking and generation
extract_: for extracting entities from social media posts (mentions, hashtags, emoji, etc.)
emoji_: emoji related functions and objects
twitter: a module for querying the Twitter API and getting results in a DataFrame
youtube: a module for querying the YouTube Data API and getting results in a DataFrame
serp_: get search engine results pages in a DataFrame, currently available: Google and YouTube
crawl: a function you will probably use a lot if you do SEO
*_to_df: a set of convenience functions for converting to DataFrames (log files, XML sitemaps, robots.txt files, and lists of URLs)
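
As an example of the *_to_df convention, here is a minimal sketch of parsing a server log file (file names are placeholders, and the chosen log format is an assumption; common formats as well as custom regexes are supported):

import advertools as adv
import pandas as pd

# Parse a raw access log into a parquet file; lines that fail to parse are saved separately.
adv.logs_to_df(log_file='access.log',
               output_file='access_logs.parquet',
               errors_file='log_errors.csv',
               log_format='combined')
logs_df = pd.read_parquet('access_logs.parquet')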
