Written on 9/23/2014 in Web development

Why I'm not going Angular



So, the past few days I've been thinking about and working on reworking the blog front-end. My plan was to use AngularJS and NancyFx, both quite efficient frameworks. I like NancyFx for its minimal approach, and I still do. I also have some experience with AngularJS (see previous posts) and was all-round very happy with the results. So I started analysing what I wanted to change.

  • Add a homepage aside from the blog
  • A new look and a simpler design
  • Some interactivity, where possible
  • Fast loading time, preferably asynchronously

It all fit the bill, so I started development, and it took me less than an hour to set up the architecture. The architecture was simple. I even set up a rather nifty mechanism to easily wire up loading indicators, errors, and results on multiple components/places on my page. Read: 'Load data X => show loading indicator on page element X => show results/errors in page element X'. Nothing really amazing, but I liked how simply it worked. I had almost finished building my entire page, layout and all, before I stumbled on my first setback: error handling.
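Before getting to that setback, here's a rough sketch of what such a loading mechanism could look like (the names and URLs are mine, not the actual blog code): a small wrapper around `$http` that exposes the request state, which the view binds to per page element.

```javascript
// Illustrative sketch only; 'loader', 'PostCtrl' and the URL are hypothetical.
angular.module('blog', [])
  .factory('loader', ['$http', function ($http) {
    // Wraps a GET request and tracks its lifecycle on a state object
    // that a single page element can bind against.
    return function (url) {
      var state = { loading: true, error: null, data: null };
      $http.get(url)
        .success(function (data) { state.data = data; })
        .error(function (data, status) { state.error = status; })
        .finally(function () { state.loading = false; });
      return state;
    };
  }])
  .controller('PostCtrl', ['$scope', 'loader', function ($scope, loader) {
    // The view shows a spinner while post.loading is true, the content
    // once post.data arrives, or an error block when post.error is set.
    $scope.post = loader('/api/posts/latest');
  }]);
```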

AngularJS error handling?

I wasn't using 'ngRoute', the AngularJS component you can use for client-side routing. I'm not a fan of client-side routing, so I went for the safer option and let NancyFx handle routing on the server. This brought with it the problem of error handling. Since my data is loaded asynchronously from the Web API, I have no out-of-the-box way of knowing on the *web-server side* whether the page/post exists. So the client's API call for a certain post may result in a '404 - Not Found'. I didn't consider this a real problem: even if there were a way of knowing on my web server which posts are available, a lot could still go wrong while getting data from the API. The client side should definitely be handling erroneous API results.

I stumbled upon this problem early on and guessed it was a common one. I looked up ways of handling erroneous API calls and, after a bit of filtering, found the following options:

  1. Show on the page that the resource was not found and why
  2. Redirect the user to an error page using a JavaScript redirect
  3. Make sure on the web-server side whether the resources exist, and return an error page if not

Option #1

This option is easy to implement and my chosen course of action. It's simple, it can be made UX-friendly (reloading the page amounts to a retry), and it can be made page-specific. But there are downsides. The most important one: I did not find a foolproof way of making sure the errors don't end up in search results. Another downside: it's harder to make this error handling dynamic so it works for every page out of the box. That was less of a problem; I like more work.
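As a sketch of how option #1 could look with the loader state from earlier (the markup is hypothetical), the same page element renders whichever state the request is in:

```html
<!-- Hypothetical template for option #1: errors are shown inline, and
     reloading the page is a natural retry. -->
<div ng-controller="PostCtrl">
  <p ng-show="post.loading">Loading post…</p>
  <p ng-show="post.error">
    Couldn't load this post (HTTP {{post.error}}). Reload the page to retry.
  </p>
  <article ng-show="post.data">{{post.data.body}}</article>
</div>
```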

Option #2

Also an easy solution. It can be done globally for all requests or handled per request. You can also set the robots meta tag on all error pages so crawlers don't index them, which should keep your error pages out of search results. The downside here is primarily bad UX. Redirecting the user to an error page means delivering an entirely different page. It also means the user can't just spam the reload button; to retry, he would have to click a link or the back button.

The back button brings us to another annoyance of this option. When the user visits page X, gets an error, and gets redirected to error page Y, he can end up stuck in a back-button loop. In other words: he keeps pressing the browser's back button and keeps getting redirected to the error page. Everyone knows that website creators who do this have a special place reserved for them in hell, probably on the throne, considering the evil involved.
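For completeness, a sketch of the global variant of option #2 with an `$http` interceptor (again, hypothetical names, reusing the 'blog' module from before). Note that `location.replace` swaps out the current history entry instead of adding one, which at least blunts that back-button loop:

```javascript
// Illustrative sketch: redirect to an error page on any failed API call.
angular.module('blog').config(['$httpProvider', function ($httpProvider) {
  $httpProvider.interceptors.push(['$q', '$window', function ($q, $window) {
    return {
      responseError: function (rejection) {
        if (rejection.status === 404) {
          // replace() instead of assigning location.href: the failed
          // page is dropped from history, so 'back' doesn't loop.
          $window.location.replace('/error/404');
        }
        return $q.reject(rejection);
      }
    };
  }]);
}]);
```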

Option #3

This really isn't an option; I already went over it a few paragraphs back. There is too much going on client-side to rely on the server side to guarantee no errors can occur.

Conclusion

So, no really good option there. While looking for others' experiences I also came across quite a few posts detailing how using AngularJS can cause SEO problems. Which is understandable; you can't blame AngularJS, because it just wasn't made with SEO in mind. When a robot crawls your page it won't wait for the content to arrive, and it won't take failed requests into account. The robot is just doing its work, so don't give it a hard time.

There are solutions out there for making Angular apps SEO-friendly. Most of them use the same principle: you detect on the server whether a page is requested by a robot, and if it is, you render the entire page on the server and send back the static result. That's all great, but for my case it's a workaround. Had I researched the framework first, I'd have found out why I shouldn't use it on my small-scale blog and not wasted valuable time. I didn't do my research and blindly fell into the AngularJS trap, thinking it was good for all cases. But it isn't.
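To sketch that principle (in Node.js here purely for illustration; my actual server is NancyFx): crawlers following Google's AJAX crawling scheme re-request a page with an `_escaped_fragment_` query parameter, and you answer those requests with a static snapshot instead of the Angular app.

```javascript
// Illustrative only: detect the crawler's escaped-fragment request and
// serve pre-rendered HTML; real browsers get the normal Angular shell.
var http = require('http');
var url = require('url');

http.createServer(function (req, res) {
  var query = url.parse(req.url, true).query;
  res.writeHead(200, { 'Content-Type': 'text/html' });
  if ('_escaped_fragment_' in query) {
    // A real setup would render the page headlessly (PhantomJS,
    // prerender.io, ...) and return that HTML; placeholder here.
    res.end('<html><body><!-- static snapshot of the page --></body></html>');
  } else {
    res.end('<html><body ng-app="blog"><!-- SPA shell --></body></html>');
  }
}).listen(8080);
```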

Just because AngularJS isn't a good fit for a small-scale blog doesn't mean it has no use case anywhere SEO is involved. The workarounds are definitely worth the time and price when you have to develop a larger interactive web application that also needs SEO. I believe (I haven't checked this at all) that Twitch uses AngularJS together with one of the pre-render solutions for SEO, which is a very valid use case.

UPDATE #1:

Google has some documentation on how you can implement SEO on an Angular website; I only just stumbled upon it. It's very helpful for understanding how the crawlers try to index an AJAX-loaded page.

Crawling ajax pages
