Archive for March, 2010

Secrets of Social Media Buzz Marketing from CMO of Virgin America

Last week, I had the opportunity to hear Porter Gale speak about how Virgin America has launched and marketed a new airline with a much smaller budget than their competition. To build their brand and sales, Porter and Virgin America have used a clever combination of inbound marketing tactics like event buzz, cool content and social media interactions.

  1. Select Your Target Customer and Grok Them. Because they were launching in only a few cities during their first few years, Virgin America knew they could not win the frequent business traveler who needs a huge network of airports and lots of schedule options. Instead, they decided to focus on a tech-savvy, online-centric consumer, which made sense given that their hub is in San Francisco and they serve cities like Los Angeles, Boston, New York and Seattle. Virgin America designs their entire customer experience around this consumer: offering in-flight WiFi, installing touch-screen entertainment systems, using channels like YouTube and Twitter to communicate with their audience, and even adding colored “mood lighting” on planes.
  2. Create Buzz-Worthy Experiences. Virgin does not have enough money to compete with larger airlines using outbound marketing (think of all the TV ads you see for air travel), so they rely on their customers to create and share content to build their brand. How do they get customers to do that? They give them experiences worth talking about. They do everything they can to make the flight experience remarkable: they offer food on demand whenever you want it – not just when the cart passes – they have power plugs at your seat for laptops and other devices, and a touch-screen entertainment system offers music, TV and movies. Then they let the magic happen and the customers talk about those experiences.
  3. Connect, Don’t Market. We’ve talked a lot in our webinars and on this blog about how critical it is in social media marketing to be a valuable resource, not to broadcast the benefits of your product. Virgin America lives that to the fullest – they see social media as a communication channel, not a broadcast medium, and they use it as an opportunity to learn about and improve their product and experience, not as a channel to pump out their latest specials. Looking at Twitter specifically and using Twitter Grade as a measure of impact, Virgin America scores 100 with 63,000 followers, which is a strong presence. However, some of their competitors got started earlier and pushed harder, and have achieved even more remarkable results: Southwest also scores 100 but has over 1 million followers, and JetBlue likewise scores 100 but has 1.6 million followers.
  4. Leverage Partnerships. When Virgin America launched service to Las Vegas, they partnered with the TV show Entourage and got tons of media exposure for free by cross-promoting the show and the airline. Entourage has a large viewership among their target audience of younger, tech-savvy consumers, so the connection made sense. Other examples include a partnership with YouTube, where they live-streamed from one of their airplanes during a YouTube Live event, and working with Victoria’s Secret on an in-flight fashion show.

What do you think? Are there any lessons from Virgin America that apply to marketing your business?


 



Determining Whether a Page/Site Passes Link Juice (and How Much)

Posted by randfish

We’ve been hearing some requests lately for some really advanced, expert-level content, and this post is here to deliver. I’ve built up a short list of topics that deal with more cutting edge SEO, and if there’s interest in this series, I’ll try to make it a regular part of the blog. These tactics aren’t black or gray hat (we’re not advocates of that kind of thing), but they’re very specific in use and tend to be at the opposite end of the "low-hanging fruit" basket.

The first in the series touches on a common SEO problem – determining whether a link has value and how much. This tactic isn’t low effort, so it should only be employed when the link or link source is particularly critical.

Testing Whether a Page/Site Passes Link Juice (and How Much)

Scenario: You’ve found some potentially valuable, but possibly suspect, link sources. These could include a seemingly high-quality directory that requires payment, or a site you’re worried may have aroused Google’s ire for one reason or another. This applies anytime you’re unsure whether a link is counting in Google’s rankings and need a credible answer.

Tactic: Find a page that’s already in Google’s index and pick a somewhat random combination of words/phrases from that page’s title and body for which it ranks in positions #3-10. For example, with the query – http://www.google.com/search?q=new+york+presentation+morning+entitled+link, my blog post from last week on Link Magnets ranks #3. The query itself is not particularly competitive and the pages outranking it don’t have the exact text in the title or domain name (a critical part of the process).

If I now place a link with that exact anchor text from another page (like the blog post you’re reading now), e.g. new york presentation morning entitled link, I should be able to see, once this post is indexed by Google’s spider, whether it passes link juice. If the page moves up 2-4 positions in the rankings, the result is positive and I can be fairly confident that the link is indeed "Google-friendly." With that knowledge secure, I can change the anchor text and/or repoint the link to the desired location. I don’t simply use the anchor text I want initially because, with competitive queries, a single link may not make enough difference for the ranking impact to be visible, and I don’t want to waste my time/money/energy.

[Image: Testing the Flow of Link Juice – metrics displayed in the SERPs via mozbar]

Special Requirements: To make the testing work, you’ll need to be able to repoint the link, change the anchor text or 301 redirect the linked-to page (though the last of these is the least desirable, since 301s lose some link juice in the process and good anchor text is so valuable for ranking in Google). Also, here at SEOmoz, we don’t recommend buying links, so while this tactic could be applied to that process, remember that manipulative links may later be devalued, wasting all that time and effort you spent acquiring them.

Results: With this technique, you not only get a yes/no answer to whether the link passes ranking value, but also a rough sense of how much (depending on the size of the position change – this can be a good reason to use pages that rank in the #7-10 range). Do take care to record the ranking positions of all the pages in the results and leave the test running for 1-2 weeks (longer if there are very fresh results ranking for the query). If you don’t, other factors may combine to obscure the true results.
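
If you want a consistent record over that window, here is a minimal, purely illustrative sketch in Python: you paste in the ordered result URLs you observe for the test query each day (copied from the SERP by hand – nothing here queries Google automatically), and it logs the target page’s position to a CSV. The file name, helper name and URLs are all made up for illustration.

```python
import csv
from datetime import date

# Hypothetical helper for recording where the test page ranks for the test query.
# serp_urls is the ordered list of result URLs you observed today (copied by hand
# from the SERP); nothing here queries Google directly.
def log_position(serp_urls, target_url, query, logfile="linkjuice_test_log.csv"):
    position = None
    for rank, url in enumerate(serp_urls, start=1):
        if target_url.rstrip("/") in url:
            position = rank
            break
    with open(logfile, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), query, target_url, position])
    return position

# Example usage with made-up URLs:
observed = [
    "http://www.example.com/competitor-a",
    "http://www.example.com/competitor-b",
    "http://www.example.org/blog/link-magnets",
]
print(log_position(observed, "http://www.example.org/blog/link-magnets",
                   "new york presentation morning entitled link"))
```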

I’m looking forward to your feedback about this technique – and let us know if you’re interested in seeing more of this advanced/edge-case content on the blog, too. Below, I’ve listed the topics I could tackle in future "Advanced" level posts.

  • Hosting Pages on Third-Party Sites
  • oDesk/Mechanical Turk for Content Development (and Link Research)
  • Email Marketing for Search Personalization
  • Modifying Product/Business Naming Conventions
  • Spiking Search Volume and Capitalizing on QDF
  • Protecting Inter-Network Links & Domain Acquisitions from Devaluation

p.s. If you do like this kind of thing, I’d also suggest:

  1. Register for SMX Advanced: Seattle or SMX Advanced: London – both are quite good and SEOmoz will be sending speakers to both. You can use the code SEOmoz@SMX for a 10% discount to either event.
  2. Check out the SEOmoz Expert Training Series DVD, which just launched last week. The video alone will get you pretty excited :-)
  3. PRO members should check out our libraries of tips, video content and webinars.

I’m in Tampa, then Miami this week, but will finally return to Seattle for some much needed time in the office next Monday. Until then, blogging, commenting & email may be a bit slow from me.




Diagnosing Google Crawl Allowance Using Webmaster Tools & Excel

Posted by Tom_C

There’s been some talk recently in the SEO industry about ‘crawl allowance’ – it’s not a new concept, but Matt Cutts recently talked about it openly with Eric Enge at StoneTemple (and you can see Rand’s illustrated guide too). One big question, however, is how do you understand how Google is crawling your site? While there are a variety of ways of measuring this (log files are one obvious solution), the process I’m outlining in this post can be done with no technical knowledge – all you need is:

  • A verified Google webmaster central account
  • Google Analytics
  • Excel

If you want to go down the log-file route then these two posts from Ian Laurie – on how to read log files and analysing log files for SEO – might be useful. It’s worth pointing out, however, that just because Googlebot crawled a page it doesn’t necessarily mean that the page was actually indexed. This might seem weird, but if you’ve ever looked in log files you’ll see that Googlebot will sometimes crawl an insane number of pages, yet it often takes more than one visit to actually take a copy of a page and store it in its cache. That’s why I think the method below is actually quite accurate: it uses a combination of URLs receiving at least 1 visit from Google and pages with internal links as reported by Webmaster Central. Still, taking your log file data and adding it into the process below as a third data set would make things better (more data = good!).

Anyway, enough theory – here’s a non-technical, step-by-step process to help you understand which pages Google is crawling on your site and compare that to which pages are actually getting traffic.

Step 1 – Download the internal links

Go to webmaster central and navigate to the "internal links" section:

Then, once you’re on the internal links page click "download this table":

This will give you the table of pages which Google sees internal links to. Note – for the rest of this post I’m going to be treating this data as an estimate of Google’s crawl. See a brief discussion about this at the top of the post. I feel it’s more accurate than using a site: search in Google. It does have some pitfalls however since what this report is actually telling you is the number of pages with links to them, not the pages which Google has crawled. Still, it’s not a bad measure of Google’s index and only really becomes inaccurate when there are a lot of nofollowed internal links or pages blocked by robots.txt (which you link to).

Step 2 – Grab your landing pages from Google Analytics

This step should be familiar to all of you who have Google Analytics – go into your organic Google traffic report from the last 30 days, display the landing pages and download the data.

Note that you need to add "&limit=50000" to the URL before you hit "export as CSV" to ensure you get as much data as possible. If you have more than 50,000 landing pages then I suggest you either try a shorter date range or a more advanced method (see my reference to log files above).

Step 3 – Put both sets of data in Excel

Now you need to put both of these sets of data into excel – I find it helpful to put all of the data into the same sheet in Excel but it’s not actually necessary. You’ll have something like this with link data for your URLs from webmaster central on the left and the visits data from Google Analytics on the right:

Step 4 – Vlookup ftw

Gogo gadget VLOOKUP! The VLOOKUP function was made for data sets like this and easily lets you look up the values in one data set against the other. I advise running a VLOOKUP in both directions – once for each data set – so we get something like this:

Note that there may be some missing data here, depending on how fresh the content on your site is (that’s possibly enough material for a whole separate post), so you should then find and replace ‘#N/A’ with 0.
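
If you’d rather do this join outside of Excel, here’s a minimal equivalent sketch in Python using pandas. The file names and column headers (“Page”/“Links” from Webmaster Central, “Landing Page”/“Visits” from Analytics) are assumptions – your exports will almost certainly name things differently, so adjust to match.

```python
import pandas as pd

# Load the two exports (file names and column headers are assumptions; adjust to match yours).
wmc = pd.read_csv("webmaster_central_internal_links.csv")  # columns: Page, Links
ga = pd.read_csv("google_analytics_landing_pages.csv")     # columns: Landing Page, Visits

# An outer merge keeps URLs that appear in only one of the two data sets,
# which is what running the VLOOKUP in both directions achieves in the spreadsheet.
combined = wmc.merge(ga, left_on="Page", right_on="Landing Page", how="outer")

# Equivalent of finding and replacing '#N/A' with 0.
combined[["Links", "Visits"]] = combined[["Links", "Visits"]].fillna(0)

print(combined.head())
```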

Step 5 – Categorise your URLs

Now, for the purposes of this post we’re not interested in a URL-by-URL approach; we’re instead looking at a high-level analysis of what’s going on, so we want to categorise our URLs. The more detail you can go into at this step, the better your final data output will be. So go ahead and write a rule in Excel to assign a category to each of your URLs. This could be anything from simply following a folder structure to something more complex based on query strings etc. The best way of doing it depends on how your site structure works, so unfortunately I can’t write this rule for you. Still, once this is done you should see something like this:

If you’re struggling to build an Excel rule for your pages and your site follows a standard site.com/category/sub-category/product URL template, then a really simple categorisation would be to just count the number of ‘/’s in the URL. It won’t tell you which category the URL belongs to, but it will at least give you a basic categorisation of which level the page sits at – see the sketch below. I really do think it’s worth the effort to a) learn Excel and b) categorise your URLs well. The better the data you can add at this stage, the better your results will be.
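
For those more comfortable in code than in Excel formulas, here is a rough sketch of both rules in Python – first-folder categorisation plus the slash-counting fallback. The URLs are made up purely for illustration.

```python
from urllib.parse import urlparse

def categorise(url):
    """Crude category rule: the first folder in the path, else 'homepage'."""
    segments = [s for s in urlparse(url).path.split("/") if s]
    return segments[0] if segments else "homepage"

def depth(url):
    """Fallback rule from the post: count the levels (slashes) in the URL path."""
    return len([s for s in urlparse(url).path.split("/") if s])

# Made-up URLs purely for illustration:
for u in ["http://site.com/",
          "http://site.com/widgets/blue-widget",
          "http://site.com/search?q=blue+widgets"]:
    print(u, "->", categorise(u), "level", depth(u))
```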

Step 6 – Pivot table Excel Ninja goodness

Now, we need the magic of pivot tables to come to our rescue and tell us the aggregated information about our categories. I suggest that you pivot both sets of data separately to get the data from both sources. Your pivot should look something like this for both sets of data:

It’s important to note here that what we’re interested in is the COUNT of the links from webmaster central (i.e. the number of pages indexed) rather than the SUM (which is the default). Doing this for both sets of data will give you something like the following two pivots:

And:

Step 7 – Combine the two pivots

Now what we want to do is take the count of links from the first pivot (from webmaster central) and the sum of the visits from the second pivot (from Google Analytics), to produce something like this:

Generating the 4 columns on the right is really easy – they’re just percentages and ratios of the first 3 columns (the sketch below shows the same calculation in code).
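
If you built the merged data in pandas as sketched earlier, the two pivots and the combined table fall out of a couple of groupby calls. The numbers below are invented purely to show the shape of the output – note the COUNT on the Webmaster Central side and the SUM on the Analytics side, exactly as in the pivots above.

```python
import pandas as pd

# One row per URL, with an assigned Category plus the two data sources
# (all numbers invented purely for illustration).
combined = pd.DataFrame({
    "Category": ["category", "category", "search", "search", "other"],
    "Links":    [12, 8, 40, 35, 5],     # internal links reported by Webmaster Central
    "Visits":   [300, 150, 4, 2, 1],    # organic visits from Google Analytics
})

pages = combined[combined["Links"] > 0].groupby("Category")["Links"].count()  # COUNT of pages
visits = combined.groupby("Category")["Visits"].sum()                         # SUM of visits

report = pd.DataFrame({"Pages crawled": pages, "Organic visits": visits}).fillna(0)
report["% of crawl"] = (100 * report["Pages crawled"] / report["Pages crawled"].sum()).round(1)
report["% of traffic"] = (100 * report["Organic visits"] / report["Organic visits"].sum()).round(1)
report["Visits per page"] = (report["Organic visits"] / report["Pages crawled"]).round(1)

print(report.sort_values("% of crawl", ascending=False))
```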

Conclusions

25% of the crawl allowance accounts for only 2% of the overall organic traffic

So, what should jump out at us from this site is that the ‘search’ pages and ‘other’ pages are being crawled quite aggressively – 25% of the overall site crawl between them – yet they only account for 2% of the overall search traffic. In this particular example that might seem like quite a basic thing to highlight; after all, a good SEO will be able to spot search pages being crawled just by doing a site review. But being able to back this up with data makes for good management-friendly reports and also helps you gauge the scope of the problem. What this report also highlights is that if your site is maxing out its crawl allowance, then reclaiming that 25% of your crawl allowance from search pages may lead to an increase in the number of pages crawled from your category pages – the pages which pull in good search traffic.

Update: Patrick from Branded3 has just written a post on this very topic – Patrick’s approach using separate XML sitemaps for different site sections is well worth a read and complements what I’ve written about here very nicely.




HubSpot TV – Mike and Karen Live in Las Vegas

Watch HubSpot TV live at 4:00 p.m. ET on Friday at www.hubspot.tv.

Episode #84 – March 19, 2010
(24 minutes, 28 seconds)

Intro

  • How to interact on Twitter: Include the hashtag #HubSpotTV in your tweets! On the show today are Mike Volpe (@mvolpe) and Karen Rubin (@karenrubin)
  • As always, all the old episodes are in iTunes: http://itunes.hubspot.tv/. If you like the show, please leave a review!

Big Brands Getting The Game On

MySpace Starts Selling User Data

More People Go To Facebook Than Google

Customer Service as Marketing – Nordstrom WINS!

HubSpot Hits South by Southwest

Forum Fodder

Marketing Tip of the Week: Get out and meet people at in-person events!

Closing

 

Video: How to Use Social Media to Manage Your Company Brand Online

Learn how to use social media to manage your company brand.

Download the free video and learn how to manage your company brand effectively using social media.


 



Linkscape Index Update and a Peek Behind the Curtains

Posted by Nick Gerner

Last week we updated the Linkscape index, and we’ve done it again this week.  As I’ve pointed out in the past, up-to-date data is critical, so we’re pushing everyone around here just about as hard as we can to provide that to you.  This time we’ve got updated information on over 43 billion URLs, 275 million sub-domains, 77 million root domains, and 445 billion links.  For those keeping track, the next update should be around April 15.

I’ve got three important points in this post.  So for your click-y enjoyment:

  • Fresh Index Size Over Time
  • Linkscape Processing Pipeline
  • What Happened To Last Week’s Update

Fresh Index Size Over Time

If you’ve been keeping track, you may have noticed a drop in pages and links in our index in the last two or three months.  You’ll notice that I call these graphs "Fresh Index Size", by which I mean that these numbers by and large reflect only what we verified in the prior month.  So what happened to those links?

Linkscape Index Size: Pages

Linkscape Index Size: Links

Note: "March – 2" is the most recent update (since we had two updates this month!)

At the end of January, in response to user feedback, we changed our methodology around what we update and include.  One of the things we hear a lot is, "awesome index, but where’s my site?"  Or perhaps, "great links, but I know this site links to me – where is it?" Internally we also discovered a number of sites that generate technically distinct content but with no extra value for our index.  One of my favorite examples of such a site is tnid.org.  So we cut pages like those, and made an extra effort to include sites which had previously been excluded.  And the results are good:

Linkscape Index Size: Domains

I’m actually really excited about this because our numbers are now very much in line with Netcraft’s survey of active sites.  But more importantly, I hope you are pleased too.

Linkscape Processing Pipeline

I’ve been spending time with Kate, our new VP of Engineering, bringing her up to speed about our technology.  In addition to announcing the updated data, I also wanted to share some of our discussions.  Below is a diagram of our monthly (well, 3-5 week) pipeline.

Linkscape Index Pipeline

You can think of the open web as having an essentially endless supply of URLs to crawl, representing many petabytes of content. From that we select a much smaller set of pages to get updated content for on a monthly basis.  In large part, this is due to politeness considerations: there are about 2.6 million seconds in a month, and most sites won’t tolerate a bot fetching more than about one page per second.  So we can only get updated content for so many pages in a month.
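
To put rough numbers on that politeness budget (the one-page-per-second ceiling is just the figure from the paragraph above – real tolerated crawl rates vary by site):

```python
# Back-of-the-envelope crawl budget for a single site.
SECONDS_PER_MONTH = 30 * 24 * 60 * 60   # ~2.6 million seconds in a 30-day month
MAX_FETCH_RATE = 1.0                     # pages per second most sites will tolerate (per the post)

pages_per_site_per_month = SECONDS_PER_MONTH * MAX_FETCH_RATE
print(f"{SECONDS_PER_MONTH:,} seconds/month -> at most "
      f"{pages_per_site_per_month:,.0f} pages fetched per site per month")
```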

From the updated content we fetch, we discover a very large amount of new content, representing a petabyte or more of new data. From this we merge non-canonical forms, remove duplicates, and synthesize some powerful metrics like Page Authority, Domain Authority, mozRank, etc.

Once we’ve got that data prepared, we drop our old (by then out-of-date) data and push the updated information to our API.  On roughly a monthly basis we turn over about 50 billion URLs, representing hundreds of terabytes of information.

What Happened To Last Week’s Update

In the spirit of TAGFEE, I feel like I need to take some responsibility for last week’s late update, and explain what happened.

One of our big goals is to give you fresh data.  One way we can do that is to shorten the amount of time between getting raw content and processing it.  That corresponds to the "Newly Discovered Content" section of the chart above.  For the last update we doubled the size of our infrastructure.  In addition to doubling the number of computers we have running around analyzing and synthesizing data, this actually increased the coordination required between those computers: if everyone has to talk to everyone else, n machines mean n(n-1)/2 connections, so doubling the number of machines roughly quadruples the number of relationships. This caused lots of problems we had to deal with at various times.

Another nasty side-effect was that this made machine failures even more common than we had experienced before.  If you know anything about Amazon Web Services and Elastic Compute Cloud then you know that those instances go down a lot :)  So we needed an extra four days to get the data out.

Fortunately we’ve taken this as an opportunity to improve our infrastructure, fault tolerance and lots of other good tech start-up buzzwords – which is one of the reasons we were able to get this update out so quickly after the previous one.

As always, we really appreciate feedback, so keep it coming!




Are You Making Marketing Decisions Like Bill Polian?

This week, the NFL voted to change how sudden-death overtime is handled during playoff games.

There are a ton of factors that led to this decision, but the key one behind this latest change is rock-hard data. As Bill Polian, the president of the Indianapolis Colts, states, “Plenty of people on the committee, myself included, are so-called traditionalists. I am proud to be one. But once you saw the statistics, it became obvious we had to do something.”

Polian is referring to the 15 years’ worth of data that has been collected since the kick-off distance rule was changed to have the ball start on the 30-yard line instead of the 35-yard line. This 5-yard change started a trend in which the receiving team got slightly better field position. When you look at the numbers, the team that wins the overtime coin toss and takes possession of the ball goes on to win the game 59.8% of the time, and an astounding 34.4% of all overtime wins come on that first possession.

From a pure numbers standpoint, if a playoff game has made it all the way to overtime, the split between winning and losing shouldn’t be skewed so heavily toward whoever wins the coin toss – so the NFL voted to change the rule.

This Blog Isn’t About Football, It’s About Marketing

When you look at everything you do with your marketing budget, ask yourself, “Why am I doing this?” If the answer is “Because we’ve always done this,” make sure you look at the data.

How To Question Your Marketing Budget

To get answers that help improve business, it is important to ask the right questions. Here are a few questions that can help when you are thinking about making changes to your marketing budgets.

  • Is this effort and money driving traffic?
  • Is that traffic becoming leads?
  • Is the sales team closing those leads as customers?
  • What is my best lead source?
  • What is my cost-per-lead for each marketing activity?

Don’t let traditionalist tendencies get in the way of radical changes that can improve your bottom line. Follow the data. Stick with strategies that work and abandon the ones that don’t.

Stop waiting for your turn to receive the ball after you’ve lost the sudden-death coin toss.


 


