Monday, October 24, 2005

Free Xenon is Here

This is a not-so-temporary blog, set up until I can find a more permanent solution. It will also take me quite some time to create a new theme for the blog, as I have many other projects that demand my attention, such as designing the site for my mother's insurance agency. I will be adding links and other content as I go.

The previous articles posted here are the ones I wrote, and can publish, from the internal blog that runs at work. I will clean them up over the next week or so. I have one other article to post, titled "Separation of Layers", which I will publish in the next week or so as well.

If you have questions, comments, or suggestions, feel free to contact me.

Browsers

If you are visiting my blog for the first time and you do not know about the Firefox browser, then read about it and go get it. It is the web browser for web developers everywhere. It will change the way you look at the web.

CSS Fonts and Font Sizing

We have had some issues with fonts and font sizes, so I thought I would share some information about them.

Font Sizes

This odd 100.01% value for the font size compensates for several browser bugs. First, setting a default body font size in percent (instead of em) eliminates an IE/Win problem with growing or shrinking fonts out of proportion if they are later set in ems in other elements. Additionally, some versions of Opera will draw a default font-size of 100% too small compared to other browsers. Safari, on the other hand, has a problem with a font-size of 101%. The current "best" suggestion is to use the 100.01% value for this property.
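Putting that together, the usual opening of a stylesheet looks something like the following (the 1em on p is just an example value, not a recommendation):

```css
body { font-size: 100.01%; } /* sidesteps the IE/Win, Opera, and Safari bugs above */
p    { font-size: 1em; }     /* then size everything else in ems */
```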

Font Type

Blurring the Lines of Separation

There are several layers to web development, and those layers should, as a rule, be kept separate. I recently read the A List Apart article named JavaScript Triggers, which stated the following:

Technically, adding this information to the class attribute is possible, but is it allowed to use this attribute for carrying information it was not designed to carry? Does this violate the separation of behavior and presentation? Even if you feel there is no theoretical obstacle, it remains a complicated solution that requires complicated JavaScript code.

This statement was made in response to the scenario of adding a CSS class for the sole purpose of acting as a hook to initiate a JavaScript behavior: in other words, adding an empty (i.e. unstyled) class to trigger JavaScript.

There is nothing inherently wrong or invalid about this from a standards standpoint. You are adding a class that has no styling associated with it. No problem. I do agree that adding a class for the sole purpose of triggering JavaScript should not be done, as it does blur the lines of separation. I am, however, more pragmatic in my design philosophy. I am, in a way, a Pragmatic Purist.

I am currently looking at using a CSS class to trigger JavaScript code. In my case, however, the CSS class(es) are used extensively (not empty), and the JavaScript is only there to reproduce a CSS behavior (pop-up menus) that is not supported by one browser (Internet Explorer). Is this wrong? I may be a bit biased, but I say no. Presentation is separate from Content, and both are separate from Behavior. Wait, you say, I just said "CSS behavior"; that is a presentation-based behavior and therefore not separate. I will cover that topic below, but for now let us just say this is acceptable: I am using JavaScript to reinforce an existing behavior (albeit a CSS behavior). Not a problem.
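As a rough sketch of the idea (not the article's actual code), the hook can be as simple as scanning the document for a class name and attaching behavior to the matching elements. The class name "menu" here is an assumption for illustration:

```javascript
// Returns true if a space-separated class attribute contains the given name.
function hasClass(classAttr, name) {
  return (classAttr || '').split(/\s+/).indexOf(name) !== -1;
}

// In a browser, use the (already styled) class purely as a hook for behavior:
if (typeof document !== 'undefined') {
  var lists = document.getElementsByTagName('ul');
  for (var i = 0; i < lists.length; i++) {
    if (hasClass(lists[i].className, 'menu')) {
      // attach the pop-up behavior to lists[i] here
    }
  }
}
```

Note that hasClass checks whole class names, so an element with class "menuitem" is not matched by "menu".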

Other Blurring

I started to think about whether existing portions of the standards might already blur the lines between the different layers (specifically Presentation and Behavior). Two things came to mind: the hover, focus, and active states of links (the most prevalent case, though they apply to other elements too), and the various CSS techniques used to create pop-up menus (display: none, or absolute positioning to move content far off-screen to the left). These result in CSS-based behavior, i.e. a response from the web page based on user interaction. If the user interacts with the site and the site responds by changing the color of a link or showing a pop-up menu, that is a behavior, and it can all be done without JavaScript. Bow down before the power of CSS and its mighty ability to blur the lines.
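For reference, the two hiding techniques mentioned above look like this (the #nav id is an assumption for illustration):

```css
#nav li ul { display: none; }        /* or: position: absolute; left: -999em; */
#nav li:hover ul { display: block; } /* reveal on hover: behavior in pure CSS */
```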

Now, using these techniques is perfectly valid from an efficiency and standards standpoint, but is it from a design perspective? By using both the very basic features of CSS (the hover, active, and focus states) and its more advanced features, we are introducing behavior into the presentation layer. Is this all bad? In the development community these are widely accepted and widely used techniques. Will they be in the future? I think so.

Separation purists might stage an uprising and try to make monumental changes to the standards to ensure the separation of content, presentation, and behavior, but any such uprising would be quelled rather quickly. Changing something as ingrained in web development as link states is not going to happen. It would require a subtle but large paradigm shift from the standards bodies, and websites would then require JavaScript just to change link states. We like our link states and pure CSS menus, thank you. They are light and efficient, and easy to maintain and use.

Internet Explorer and Separation

Pure bleeding-edge sites do not function properly in Internet Explorer because it does not support the CSS hover state on non-link elements. The process of writing this made me think... maybe Microsoft is inadvertently doing a good thing for the development community by not being standards compliant (in this specific instance). Websites cannot reliably use a pure CSS pop-out menu because Internet Explorer does not support it. This forces developers to generate the pop-out behavior via JavaScript, which in turn forces layer separation. Good?? Begrudgingly I will say 'Yes', but that does not mean I have to like it. For now, Internet Explorer only supports the hover state on links. Link behaviors will be forever; hover on other elements... hmmmm... we will see. For now I will create my CSS-based menus and use a JavaScript backup for those browsers that do not support them. The best of both worlds.
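A minimal sketch of that JavaScript backup, in the spirit of the well-known "Suckerfish Dropdowns" technique: toggle a class on hover so the stylesheet can show the submenu under both li:hover and li.over. The menu id "nav" and the class name "over" are assumptions for illustration:

```javascript
// Pure helpers for class toggling (usable outside a browser as well).
function addClass(current, name) {
  var parts = current ? current.split(/\s+/) : [];
  if (parts.indexOf(name) === -1) parts.push(name);
  return parts.join(' ');
}

function removeClass(current, name) {
  var parts = current ? current.split(/\s+/) : [];
  var kept = [];
  for (var i = 0; i < parts.length; i++) {
    if (parts[i] !== name) kept.push(parts[i]);
  }
  return kept.join(' ');
}

// In a browser, mimic li:hover for IE by toggling the "over" class.
if (typeof document !== 'undefined' && document.getElementById('nav')) {
  var items = document.getElementById('nav').getElementsByTagName('li');
  for (var i = 0; i < items.length; i++) {
    items[i].onmouseover = function () { this.className = addClass(this.className, 'over'); };
    items[i].onmouseout  = function () { this.className = removeClass(this.className, 'over'); };
  }
}
```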

Sunday, October 23, 2005

Lorem Ipsum and other such nonsense...

I am updating the site to include some more useful links. Here are some links about Lorem ipsum: Description and Ungreek's Differring Source (Lorem Ipsum) Generator. More than you ever wanted to know about the correct usage of abbr and acronym, courtesy of Lars Holst. To top it all off, we have a little bit about what the heck that Scroll Lock button on our keyboards actually does...

Monday, October 17, 2005

Robots, Spiders, and other Crawlers... Oh My!!

Robots are automated programs spawned by search engines to find and index content. Each search engine has its own robot (sometimes referred to as a spider) that (usually) sends a customized User Agent string to identify itself to web servers. Here are example user agent strings found in our server logs:

Googlebot/2.1+(+http://www.google.com/bot.html)  // Google's Spider
Mozilla/5.0+(compatible;+Yahoo!+Slurp;+http://help.yahoo.com/help/us/ysearch/slurp)  //Yahoo's Spider

These two are nice spiders, as they identify themselves appropriately. User agents do not have to identify themselves correctly, so spotting them (if you are looking for them) can sometimes be difficult. Most play nice - yea!
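A trivial sketch of spotting those two crawlers in a log line by substring match. Note the '+' separators in the logged strings above, which is why this matches "Slurp" rather than "Yahoo! Slurp"; and since user agents are self-reported, this only catches the bots that play nice:

```javascript
// Flags the two example crawlers shown above; a real log analyzer would keep
// a maintained list of bot signatures.
function isKnownBot(userAgent) {
  return /Googlebot|Slurp/.test(userAgent);
}
```

For example, isKnownBot('Googlebot/2.1+(+http://www.google.com/bot.html)') is true, while an ordinary browser string is not flagged.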

I, Robot(.txt)

The robots.txt that I have created is as follows:

User-agent: *
Disallow: /_css/
Disallow: /_images/
Disallow: /_scripts/
Disallow: /_test/
Disallow: /data/stuff/*.htm
Disallow: /mm2css/
Disallow: /mm2img/
Disallow: /mm2script/

There is not much to it, and it is a really simple process to create one. The first part of a robots.txt is the User-agent line, which specifies which user agents the following rules apply to. We specified *, which means all user agents. We can also write rules for a specific user agent, such as Googlebot: a line reading User-agent: Googlebot means the rules that follow apply only to Googlebot. Nothing forces a bot or spider to follow the rules specified in the robots.txt file; they are followed by choice.
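For example, a section that applies only to Googlebot (the disallowed path here is just for illustration) would look like:

```
User-agent: Googlebot
Disallow: /_test/
```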

The next statements are exclusion statements: we tell the user agents which directories to exclude from indexing. Some bots also support file-level exclusions, and I use both here. I have disallowed the image, script, and CSS directories, as their contents are not useful in a search index, so I will just save the robots the time. I have also excluded the _test directory, as it does not contain indexable data. The last thing I excluded was *.htm in the stuff directory, because we do not want people landing on those pages directly. The intent is that index.html is still indexed for the stuff directory because it has the .html extension, while the "subordinate" pages are not, as they have the .htm extension (note that for wildcard-aware bots such as Googlebot, a trailing $ anchor, as in *.htm$, is the safe way to ensure the pattern does not also match .html files). The robots.txt file is looked for at the site's root.
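The matching itself is simple prefix matching, with '*' as a wildcard extension honored by some bots (it is not part of the original robots.txt convention). A sketch of the logic, using a few of the rules above:

```javascript
// Returns true if the path matches any Disallow rule. Rules are prefix
// matches; '*' (a wildcard extension) is translated to '.*'.
function isDisallowed(path, rules) {
  for (var i = 0; i < rules.length; i++) {
    var pattern = '^' + rules[i]
      .replace(/[.+?^${}()|[\]\\]/g, '\\$&')  // escape regex metacharacters
      .replace(/\*/g, '.*');                   // then expand the wildcard
    if (new RegExp(pattern).test(path)) return true;
  }
  return false;
}

var rules = ['/_css/', '/_test/', '/data/stuff/*.htm'];
```

With these rules, '/_css/site.css' and '/data/stuff/page.htm' are disallowed, while '/index.html' is not.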

Other Means to Restrict Access

No matter what you put in your robots.txt file, a bot can always access your resources, since robots.txt is not enforced on user agents. If you want to ensure user agents cannot access your resources, secure them with permissions.

To reduce the chance that a bot will try to index your site, you can also ensure that there are no links to it. If there are no links, a bot has very little way of finding it and very little reason to index it.

Thursday, October 13, 2005

CSS 2.1 Specificity Calculation

There has been a round of posts on Molly.com and Malarky showing how to calculate CSS 2.1 specificity for a given rule, to determine which style takes precedence. There is also some info on HTML Dog, Meyer Web, and Juicy Studio.

It is not common to have problems with this. I have only been confused about the specificity of a given style once or twice, so it is something that is nice to be aware of in case you encounter it.
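The calculation boils down to counting ids, then classes/attributes/pseudo-classes, then element names, and comparing the three counts left to right. A rough sketch for simple selectors (it ignores :not(), quoted attribute values, and pseudo-elements, which a full implementation would have to handle):

```javascript
// Returns [ids, classes + attributes + pseudo-classes, elements]
// for a simple CSS selector string.
function specificity(selector) {
  var ids = (selector.match(/#[\w-]+/g) || []).length;
  var classes = (selector.match(/\.[\w-]+|\[[^\]]*\]|:[\w-]+/g) || []).length;
  // Blank out everything already counted, then count bare element names.
  var rest = selector.replace(/#[\w-]+|\.[\w-]+|\[[^\]]*\]|:[\w-]+/g, ' ');
  var elements = (rest.match(/[a-zA-Z][\w-]*/g) || []).length;
  return [ids, classes, elements];
}
```

For example, 'ul#nav li.active a' yields [1, 1, 3], which beats 'a:hover' at [0, 1, 1] because the id column wins first.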