PseudoSavant: The Musings of Paul Ellis (https://pseudosavant.com/blog)

Meta "When is it stealing?" Inception.
https://pseudosavant.com/blog/2014/02/27/meta-when-is-it-stealing-inception/
Fri, 28 Feb 2014 07:50:25 +0000

Almost anyone who has a blog has found the entire contents of their blog copied on various blogs around the world. A startup has taken this wholesale stealing borrowing republishing syndicating to a new place: professional narration. Unfortunately it takes <sarcasm>a lot of work</sarcasm> to figure out how to ask for permission to reuse someone's content, as Scott Hanselman recently found out.

If you are interested in the narration of Scott's blog post about narrated blog posts, you can download the MP3 or check out the YouTube video below. I figure it is ok that I give this content away, since it would be unusually difficult for me to figure out how to ask permission first.

For those who would rather read a transcript of the narration of his blog post about narrated blog posts I have included the automatic closed captions that YouTube generates below:

when is it stealing as posted on handsome and dot com written by Scott Hanselman anything you put on the Internet is going to get stolen its prefer a beloved shared a link to a but often he gets copied and copied again RSS is magical but it makes it even easier to programmatically syndicate copy content church around and you’ll likely find complete copies of your entire blog mirrored in other countries there’s so many web sites now media empires that have taken aggregation to the extreme giving it the more palatable name content curation not to be clear I respect the work involved in curation sites like dumbest weeds dot com require work and attribute creators but taking a post copying unique content even paraphrasing and then including a small Inc just as in kind forget about the legality of it remembering I N O but it’s just porn etiquette to not ask permission before using non Creative Commons content every week or two I got an email from some large aggregation site it says we’d love to reprint your post it’ll get you more readers the few times I’ve done is they’ve gotten 50,000 views and I’ve gotten three hundred referral views likely because the original appeared on handsome and dot com link at the bottom is in 4.5 aunt sites like buyer on over and BuzzFeed are effectively reblogging and embedding machines powered by link bait copywriters what happened next will shock you even if you make a piece of software someone may just wrapped slash embed your installer with their own installer and all the whole business around it me reading your blog posts today was pointed out to me that in nearly seven year old and not very good blog post a mine had been narrated effectively turned into a podcast by a startup called mono by the way it’s more than a little ironic that my post wasn’t even mine it’s an excerpt published with permission a friend Patrick Caldwell’s larger post I’ve used to you mono developer tools and embedded in a rated version here first let me just say 
that this is essentially a great idea it’s the opposite of transcribing a podcast it’s creating podcasts from existing content using professional narrator’s not just text to speech could be great for not just the visually impaired but also for anyone who wants to catch up on blogs while commuting where the content come from here’s a screenshot of the post on a mano site you can see my name handsome man is there but it’s not a link the headline is a link but you never know until you have read over it there’s really no easy way to tell where when and how this content came about I think that Amano could easily redesigned the side to put the content owner front-and-center podcast an audio snippets from blog post great idea except I wrote the script for this podcast and if I wrote the script and made it the narration then this must be a partnership right however if we look at two my nose own terms abuse so three claims no ownership or control over any of the content you post to the service your User Content you or third party licensor as appropriate retain all copyright patent and trademark rights to any of the content you post on or through the service you are responsible for protecting those rights okay so they don’t own the content by posting your User Content on or through the service you grants 03 a universal non-exclusive world the free license to reproduce adapt distribute and publish such content to the service and for the purpose of promoting so three and its services I’m pretty sure I have been granted them a universal license to my content as I didn’t seem at this link on their homepage it says that you tell us what article should be voiced the community submits links sometimes the content there a fan of but don’t own then mono narrated you may not aggregate copy or duplicate any so three content week but I can’t copy their content their content that was generated from my content does this mean I can get a book from the library narrated turn into a podcast 
Adam on a lab at chance Orman I’m fairly sure audio book creators get permission from the original authors I’m told by tomatoes Twitter account that on the first person to object to the content being copied without permission Scott Hanselman Adam on lap a longtime proponent accessible content but surely I’m not the first person offended by discovering their content copied at chance a min the feedback we’ve been getting from bloggers as they appreciate the distribution plus value-added you are actually the first I certainly don’t but d’amato is malicious mottos perhaps naive if they think they can narrate blogs without someone speaking up that said their narrator’s are top notch and our site now for both attractive and usable frankly I’d be happy if they narrated by all blog or at least the good stuff and not a lousy decade-old stop made a podcast feed my blog like their competitor cast if I but I’d like a model to do it with me sites like this should ass creators first and their business model should be based on partnerships with content creators assumptions stitcher has the right idea I’ve submitted my content to them and entered into a partnership that in just suck my podcasts and make a radio station even a single email from a monologue hey we would like to hear your blog click here in San this little form would have been sufficient married first ask questions later Michael dunbar knows with this Tweet advancement Adam on a Web kami typically English but the whole thing could have been avoided with manners that in easily solved problem and it’s not just a problem a tomato this applies to all businesses and start-ups that rely on content created by others I think it’s important honor attribution this isn’t about money recon copyright all those things to apply rather this is about netiquette when you’re building a business model built around partnerships and transparency assumptions around fair use and copyright ask first what are your thoughts dear reader

Craftsman
https://pseudosavant.com/blog/2014/02/13/craftsman/
Fri, 14 Feb 2014 07:48:14 +0000

There is a trait I have had for a very long time which I only recently consciously realized. It is that I aspire to be a craftsman*. Wikipedia describes a craft as "lying somewhere between an art (which relies on talent and technique) and a science (which relies on knowledge)."1 Something about being at the intersection of art and science has always been intoxicating to me.

I have always liked to create things, and appreciated the art of created things. As a senior in high school I took auto, wood, and metal shop at the same time, all year. It wasn't that I was just some student in those shop classes either; I was the top student. So even though, generally speaking, I was a horrible student in high school, when it came to a class where I could create, I aspired to be a craftsman.

Some people are content with only learning enough to make a cutting board, and doing that over and over. A craftsman isn’t like that though. They are never content with where their skills are in their craft. They want to know how to use every tool in the shop so that they can create anything and everything. If there is something new, they want to know how to leverage it.

A craftsman is someone who equally values knowledge (what wood should I use?), continued learning (how can I make a jig to create this piece?), and practical application (creating the piece). Lastly, and I think most importantly, a craftsman is the type of person who takes personal pride in what they create. They'd gladly sign their name on what they create. Nearly two decades later I'm still very proud of the first-place-winning quilted maple curio cabinet hanging in my house that I made as a high school student. It is probably the only thing from high school I'm particularly proud of, in fact.

But I don't create much using wood, metal, or socket wrenches anymore. My craft of choice now is software. The heart of software is creating things. Software is an amazing place where you can take the science of math and computers and apply it like an art to create something you can use. And not only can you use it, but because of the economics of software you can basically give it away for free to everyone you know, or even don't know for that matter.

Professionally, I am a product owner. I thrive on figuring out what to create (knowledge) and working with a team to build what previously didn’t exist (application). I’m drawn to other aspiring craftsmen being on my team. It is my opinion that great software is created by craftsmen.

As a hobby I love to code. Outside of work and family it is the number one thing I do, but I don’t think I could ever do it as my day job. Perhaps that’s because at home I can code just to enjoy the craft. I don’t have to worry about the strategy, deadlines, or other constraints that exist in a business. I can just craft code I find beautiful.

Hopefully in twenty years I’ll still be proud of the nuances of some of the code I write. Just like I do now knowing that my curio cabinet has dove-tail joints instead of dados, and book-matched quilted maple instead of a veneer maple plywood.

*It is my intention that the term ‘craftsman’ be considered a gender neutral noun.

1. http://en.wikipedia.org/wiki/Craft

JS 101: Cache your selectors
https://pseudosavant.com/blog/2014/01/30/js-101-cache-your-selectors/
Thu, 30 Jan 2014 21:43:07 +0000

One of the slowest things you can do with JavaScript is work with the DOM. And one of the slowest DOM operations is performing a query to find DOM elements. Caching those queries can have a significant performance impact. Here's how you do it.

Here is an example of the type of code you shouldn't write, but that I have seen many times. It makes multiple changes to some element(s) and performs the query selector every time it is needed.
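The code sample here didn't survive the feed extraction, but the anti-pattern described looks something like this sketch (the `.menu` element and function name are my own illustration, not the original code):

```javascript
// Anti-pattern: every statement below re-queries the DOM for the same element
function showMenu() {
  document.querySelector('.menu').classList.add('open');
  document.querySelector('.menu').setAttribute('aria-hidden', 'false');
  document.querySelector('.menu').style.display = 'block';
}
```

Three statements, three full DOM traversals, all to find the exact same element.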

You can 'cache' the result of queries you know you'll use again in a variable, however. Then each time you need to operate on those elements you just use the variable you assigned them to. If there are queries you use in multiple places in your app, it can be a good idea to cache them at the start of your app so that you can reference them later.
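The 'good' version is also missing from the extracted text; a minimal sketch of the cached approach (names are mine):

```javascript
// Query once, cache the result in a `$`-prefixed variable, reuse it everywhere
function showMenu() {
  var $menu = document.querySelector('.menu'); // one DOM query...
  $menu.classList.add('open');                 // ...reused for every change
  $menu.setAttribute('aria-hidden', 'false');
  $menu.style.display = 'block';
}
```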

Another good common practice is to prefix your cached variables with a `$` (like `$` in jQuery) to indicate that the variable holds a query result. I follow this pattern even when I don't use jQuery, as seen in the example above.

Another reason to use cached queries is that you can perform queries on just the child elements of that element. Suppose you have a form and you will be working with many of the fields in the form to validate and submit it. In this scenario I will typically query for the form first and then find the child elements of the form. In the example below this means that instead of looking at every `input` on the page and checking whether it is a descendant of `.myForm`, it will just look at the input fields that are children of `.myForm`.
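The form example itself is missing; a sketch of what it likely resembled (form and field names are my own):

```javascript
// Query the form once, then search only among its descendants
function getFormValues() {
  var $form = document.querySelector('.myForm');
  var $inputs = $form.querySelectorAll('input'); // only inputs inside .myForm
  var values = [];
  for (var i = 0; i < $inputs.length; i++) {
    values.push($inputs[i].value);
  }
  return values;
}
```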

The best way to manage caching of your queries, though, is to make the computer do it for you automatically. The function(s) below wrap the native DOM `querySelectorAll` or jQuery and automatically cache every result you look up through them. This is actually better than caching your selectors in advance, because the client only pays the performance cost when a query actually gets run.
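The original function is gone from the extracted text; here is my reconstruction of the idea, written as a wrapper around whatever query function you hand it so it isn't tied to the DOM:

```javascript
// Returns a query function that remembers every selector it has resolved.
// In the browser you'd create it with:
//   var $q = makeCachedQuery(document.querySelectorAll.bind(document));
function makeCachedQuery(queryFn) {
  var cache = {};
  return function (selector) {
    if (!(selector in cache)) {
      cache[selector] = queryFn(selector); // only pay for the query once
    }
    return cache[selector];
  };
}
```

One caveat: cached results go stale if matching elements are later added to or removed from the page, so this works best for elements that exist for the life of the app.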

JS 101: Global Variables
https://pseudosavant.com/blog/2014/01/29/javascript-101-global-variables/
Wed, 29 Jan 2014 23:14:39 +0000

Understanding global variables and scope is very important in JavaScript. Misunderstanding a variable's scope can lead to odd bugs and broken code. This is made more problematic because the default scope in JavaScript is global. But what is global, and how should you use it?

In JavaScript all code shares the same 'global' scope. The global scope is actually just a reference to the top-most object in the scope chain. In browsers the global object is `window`. Other environments, like Node.js, have a different global object though.

Global Variables

Any variable in the top-most scope is considered a global variable. In the example below `a` is a global variable because it was defined at the top-most scope. Global variables are actually just properties of the global object, which is why `a === window.a` is true.
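The example was lost in extraction; a sketch of it, adjusted so it also runs outside a browser (where the global object isn't `window`):

```javascript
// The global object is `window` in browsers; Node exposes it as `global`
// (modern engines also provide `globalThis` everywhere)
var g = typeof window !== 'undefined' ? window : globalThis;

g.a = 1;                // equivalent to a top-level `var a = 1` in a browser script
console.log(a === g.a); // true: `a` is just a property of the global object
```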

Generally speaking it is best to limit the number of global variables you use so that you don't pollute the global namespace. You can prevent variables from being global by enclosing your code in an immediately invoked function expression (IIFE). Variables declared inside the function will be scoped to just that function.
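A sketch of the missing IIFE example (variable names are mine):

```javascript
(function () {
  var hidden = 'only visible in here'; // scoped to the IIFE
  console.log(hidden.toUpperCase());   // fine inside the function
})();

console.log(typeof hidden); // "undefined": nothing leaked into global scope
```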

Variables declared without `var` are implicitly global, even when declared inside an IIFE. It is considered bad practice to declare variables without `var`, especially since it can happen by accident. Static code analysis tools like JSHint will specifically warn you about it.

Sometimes, however, you'll want to export some variables to the global namespace. The example below shows a 'good' way to create global variables. I assigned the local variable `y` to a global variable `z`, which I can now access anywhere.
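Reconstructed from the description (with `globalThis` standing in for the plain `window` reference a 2014 browser example would have used):

```javascript
(function () {
  var y = 42;       // local to the IIFE
  globalThis.z = y; // deliberately exported to the global object
})();

console.log(z); // 42: accessible anywhere now
```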

Here are some other ways to create global variables:

Global variable in any environment
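The code under this heading didn't survive extraction; a sketch of the environment-agnostic pattern that was common at the time (the property name is illustrative):

```javascript
// Find the global object without assuming a browser -- the pre-`globalThis`
// idiom common when this post was written
var root = (function () {
  if (typeof window !== 'undefined') return window; // browsers
  if (typeof global !== 'undefined') return global; // Node.js
  return this;                                      // non-strict fallback
})();

root.appName = 'demo'; // a global variable in any environment
```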

Export functions to global
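The code here is also missing; a typical shape for exporting a function to the global object (the `greet` function is my own illustration):

```javascript
(function (root) {
  // Everything in here stays private...
  function greet(name) {
    return 'Hello, ' + name + '!';
  }
  root.greet = greet; // ...except what is explicitly attached to the global object
})(typeof window !== 'undefined' ? window : globalThis);

console.log(greet('world')); // Hello, world!
```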

Simple Web Project Deployments with BitTorrent Sync
https://pseudosavant.com/blog/2013/07/18/simple-web-project-deployments-with-bittorrent-sync/
Thu, 18 Jul 2013 20:00:04 +0000

I have been looking for a good solution for deploying my various web projects lately. It needed to be lightweight, easy to use, quick to revert, and light on server resources. I was leaning toward using Git but ended up using BitTorrent Sync.

You have probably heard of BitTorrent already for their ubiquitous peer-to-peer file sharing protocol and apps. BitTorrent Sync is a new beta product they have that uses a lot of the same core technology but accomplishes a different goal. It syncs folders on multiple devices (computers, tablets, phones, NAS drives, servers) without any central cloud storage service.

Using direct peer-to-peer communication has several advantages over cloud services like SkyDrive or Dropbox. It can sync very large files and folders without having to pay for extra cloud storage. It is more secure, as transfers are encrypted by the clients and never stored on a remote server where the NSA could request them. Most importantly for my use case, it can be a lot faster.

For my deployments I wanted an easy way to push new folders or files from my development server at my house to my Windows Server 2012 web server 'in the cloud'. There are a number of ways to do this, from simple SFTP/SCP, to Git, or even IIS Web Deploy. Git was looking like a good fit, but to be honest, for some small projects it seemed like overkill to set up a Git repo.

Using BitTorrent Sync I now have a folder on my development machine that is synced with my web server. When I want to push new files I just copy them to the appropriate folder on my development machine. If I want to be able to roll back, I just make a copy of the existing folder(s) locally before I replace them. Since I'm always working with the files locally the changes are instant, and BitTorrent Sync propagates them very quickly.

My first test was to update three WordPress blogs I maintain. To do that I needed to push 36MB of files to the server. What made this a good test is that the files are small, so 36MB worked out to about 3,000 files. Typically the low I/O performance of a single-threaded transfer protocol like SFTP or SCP makes an upload like this slow. Uploading those files with SCP took about 10 minutes, but BitTorrent Sync did it in less than 2 minutes. Memory usage was also a concern, since I have a small VM in the Azure cloud, but most of the time it uses less than 10MB, and I haven't seen it go higher than 40MB.

I decided to set it up with two-way synchronization, but I could have made it only sync from my PC to the server by setting it up as read-only for the server. A folder can be synced with more than two devices too, so it would be easy to allow someone else access to push files to my server. In fact, I would rather just share a folder with a novice web developer using sync than deal with the hassle and security issues of giving them SSH/SCP/Git credentials.

One quirk with BitTorrent Sync is that it doesn't yet run as a service; it just runs in the system tray, so if you log out (which is common on a server) the app closes. To work around that until the feature is added, I set up Windows Task Scheduler to launch BTSync.exe under my credentials when the system boots. This sounds counter-intuitive, but if you set it up this way you must uncheck the 'Start with Windows' box in the BitTorrent Sync settings so that it doesn't try to launch again when you log in.
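As a sketch, such a scheduled task could be created from an elevated command prompt with something like this (the task name, install path, and account below are illustrative, not from the original post; `/RP *` makes `schtasks` prompt for the password):

```shell
REM Launch BTSync at boot under a specific account so it runs without a login
schtasks /Create /TN "BTSync" /SC ONSTART ^
  /TR "\"C:\Program Files\BitTorrent Sync\BTSync.exe\"" ^
  /RU MYSERVER\paul /RP *
```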

The best part of all of this is that it was just so quick and easy to set up. It took me less than 30 minutes to set it up on both machines, sync down about 1GB of websites, and update my three blogs. Check it out at BitTorrent Labs.

psMathStats 2.0
https://pseudosavant.com/blog/2013/06/14/psmathstats-2-0/
Fri, 14 Jun 2013 17:58:09 +0000

I just pushed the latest version of psMathStats to GitHub. It had been sitting on the shelf 95% done for probably a year, but it is out at last. There are some new methods, and while the syntax is almost exactly the same, it isn't a drop-in replacement for 1.0 as it now uses my `ps` namespace.

New Features

A full breakdown of all the methods and functions in 2.0 is available on GitHub, but these are some of the new features.

  • Array.sample
  • Array.histogram
  • Array.countByType
  • Array.percentile
  • ps.math.even
  • ps.math.odd
  • ps.math.product
  • ps.math.randomBetween
  • ps.math.randomNormal

Array.sample

Probably the biggest new feature is the ability to do sampling for any of the Array methods using Array.sample. It returns a randomly sampled array of any length less than or equal to the source array's length. You can use it to do calculations over large datasets (1MM+ rows) very quickly while sacrificing only a small amount of accuracy.

Here is an example:

// This will take a long time (many seconds) to run.
// Makes the browser become unresponsive
tenMillionRowArray.stdDev();

// Takes only a few milliseconds to run, and the
// returned value is almost exactly the same.
tenMillionRowArray.sample(20000).stdDev();

Suggestions

As always, if you have any other useful math or statistics functions you’d like to see implemented just drop me a line at Twitter or GitHub with some details on how to perform the calculation. Even better, just send me a pull-request with your implementation. ;)

Ellis' Law of Software Projects
https://pseudosavant.com/blog/2013/03/20/ellis-law-of-software-projects/
Thu, 21 Mar 2013 03:45:02 +0000

I have been involved in a lot of software projects in my professional life. And no matter the organization, there have always been two constants: pressure from "the business" to increase the scope, and some sort of deadline (tradeshow, quarterly goal, etc.) that "management" won't bend on. Everyone knows you can't have your cake and eat it too, though, and I'll show you why with what I humbly call "Ellis' Law of Software Projects".

Ellis' law is simply this: (Scope * Quality) / Resources = Duration. This 'law' isn't something I have decreed to be true in projects I've been a part of. It is something I have observed, more like how Newton observed an apple falling on his head. There is an immutable relationship between the components of the equation: if you change one component, then at least one other will change. Let me break this down further by sharing my definitions of each component.

Components of the law

Scope is simply the list of stuff to be built. It could be a marketing requirements document (MRD), a product requirements document (PRD), a Scrum product backlog, a BDD behavioral spec, or any other form of ‘requirements’ even if it’s just a list in the founder’s head.

Quality in this equation is expressed as an oversimplified 0-100% of the theoretical quality you could intend to achieve for a project. QA is always the first place people, especially management, want to trim when a project timeline gets tight. They never explicitly accept that they are allowing the quality to drop, of course. In their mind the quality should stay the same; we usually do a lot of superfluous tests, apparently. But even if you are comfortable with the resulting quality of the project, you will never be as certain that the quality is as high as you would like.

Resources has a very broad meaning in this context. It represents anything that can be used to complete the project. It can be more team members, longer hours, improving the talent level of the team, faster computers, more monitors, you can even increase the resources by lowering the overhead (meetings, process, etc.) that takes up the team’s time. Once QA has been trimmed to the bone, the next step is always stealing resources from another team even though your team wastes 1/3rd of their time in meetings.

Lastly, the result of all of this is the duration of the project, which is pretty self-explanatory. In the primary form of the equation expressed above it is how long the project will take. It can also be the duration you desire if you are solving for one of the other components though.

Permutations of the law

You don't need to exactly calculate the law, but I like the equation form of this idea. Using basic algebra you can 'solve' for each of the different components.

Duration = (Scope * Quality) / Resources. If you know your scope, quality, and resources then the duration is decided for you.

Resources = (Scope * Quality) / Duration. If you know your scope, quality, and how long the duration can be, then you know what resources you will need to complete the project on time.

Quality = (Duration * Resources) / Scope. If you know your scope, duration, and resources then you know what quality you will end up with.

Scope = (Duration * Resources) / Quality. If you know your duration, resources, and quality target then you know how large your scope can be.
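The equation images in the original post didn't survive extraction, so here are the four permutations written out as code (units are abstract; the point is the relationship, not the arithmetic):

```javascript
// Ellis' Law: (Scope * Quality) / Resources = Duration, solved each way
function duration(scope, quality, resources) { return (scope * quality) / resources; }
function resources(scope, quality, duration) { return (scope * quality) / duration; }
function quality(scope, resources, duration) { return (duration * resources) / scope; }
function scopeFor(quality, resources, duration) { return (duration * resources) / quality; }

// Cut the team in half and duration doubles:
console.log(duration(10, 1.0, 5));   // 2
console.log(duration(10, 1.0, 2.5)); // 4
```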

tl;dr

So the next time someone says you need to 'accelerate' some project because it has to be done sooner, you know what to do. Figure out a nice way to either 1) ask for more resources, 2) ask them which features they want to drop, or 3) explain to them why you are lowering the quality bar. The choice is theirs.

Bookmarklet: QR Code This Page
https://pseudosavant.com/blog/2012/01/18/bookmarklet-qr-code-this-page/
Wed, 18 Jan 2012 15:54:07 +0000

Lately I have often found myself wanting to pull up on my phone a page I have open on my desktop. After doing this quite a few times I figured there had to be a better way. Enter the QR code bookmarklet.

If you aren't familiar with bookmarklets, they are little bits of JavaScript, run via a web browser bookmark, that do things on the current webpage. I used the excellent bookmarklet builder at subsimple.com to turn my JavaScript into a bookmarklet. For a more in-depth understanding of bookmarklets check out the article on Wikipedia.

When you click this bookmarklet it creates a QR code using Bit.ly and shows it in the top right-hand corner of the page. Then use your favorite QR code reader (my Windows Phone has a great one built into search) and click on the link. You can make the QR code disappear by clicking on it.

To add this bookmarklet drag the link below to your bookmark toolbar. You can try it out to see what it does by just clicking on the link and it will run on this page.

QR code this page
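The bookmarklet link itself didn't survive extraction. Purely as an illustration of the idea, the logic might look like the sketch below; the QR image endpoint is a placeholder, not the original Bit.ly-based code:

```javascript
// Build and attach a QR overlay for the current page; click it to dismiss.
// To use as an actual bookmarklet, inline this in `javascript:(function(){...})()`
// and call it with the real `document` and `location`.
function addQrOverlay(doc, loc) {
  var img = doc.createElement('img');
  img.src = 'https://example.com/qr?size=150&data=' + encodeURIComponent(loc.href);
  img.style.cssText = 'position:fixed;top:10px;right:10px;z-index:99999;';
  img.onclick = function () { doc.body.removeChild(img); }; // click to hide
  doc.body.appendChild(img);
  return img;
}
```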


Agile Scrum: eliminate "intellectual inventory" using Just-In-Time software development
https://pseudosavant.com/blog/2011/01/29/agile-scrum-eliminate-intellectual-inventory-using-just-in-time-software-development/
Sat, 29 Jan 2011 08:42:00 +0000

In the software development world there are two main camps for how software should be built: waterfall and agile (usually a form of Scrum). Agile is an iterative, incremental process, and waterfall is the classical sequential process. The sequential approach is full of inventory, and inventory = waste. As someone who has made software both ways, I will show you why I prefer Scrum.

Waterfall

In a waterfall process the development flows sequentially downward through each phase of the model: requirements definition, design, implementation, verification, etc. Each of these steps is really a form of what I call "intellectual inventory". The phases in the model change from company to company but this is basically what happens:

  1. An inventory of requirements is created before the product can be designed
  2. An inventory of design is built before engineers can code
  3. Engineering produces an inventory of untested/buggy code before QA can test
  4. QA builds up an inventory of defects for engineering to go back and fix
  5. Maybe user experience then builds a list of usability defects to go back and fix

Keep in mind that each phase in this process will take at least one month, and usually quite a bit longer depending on the project. That means when you get to step 5 you have two choices:

  1. Don’t fix any issues and ship anyway
  2. Ship very late and go back to step 2

Neither option is very good for you, your customers, or your company. There is another industry that used to produce their products this way that we should learn a lot from: the auto industry.

Lessons from another industry

Cars used to get produced in huge batches as it was seen as inefficient to produce a variety of cars at the same time. Large caches of parts were held in inventory every step of the way to minimize fluctuation in supply and demand. This resulted in greatly reduced flexibility. In fact the Model T famously came in any color you wanted “so long as it is black.” This was the Ford way, and Ford was seen as the epitome of success with their assembly line production.

Then Toyota came along with their Toyota Production System (TPS) in the 1950s and changed the whole game. TPS is now more generically referred to as Just-In-Time (JIT) production. In a JIT world, inventory equals waste, by definition. Consider this paragraph from Wikipedia on a TPS principle known as Heijunka (production leveling):

“To prevent fluctuations in production … it is important to try to keep fluctuation in the final assembly line to zero. Toyota’s final assembly line never assembles the same automobile model in a batch. Production is leveled by making first one model, then another model, then yet another. In production leveling, batches are made as small as possible in contrast to traditional mass production, where bigger is considered better. When the final assembly process assembles cars in small batches, then the earlier processes, such as the press operation, have to follow the same approach. Long changeover times have meant that economically it was sound to punch out as many parts as possible. In the Toyota Production System this does not apply. Die changes (changeovers) are made quickly and improved even more with practice. In the 1940s it took two to three hours, in the 1950s it dropped from one hour to 15 minutes, now it takes three minutes.“

TPS actually does the exact opposite to minimize fluctuations in production, and even while minimizing fluctuations it improves responsiveness to customer and market demands. Most cars are available in at least a dozen colors and have so many possible options (sunroof, satellite radio, GPS, power seats, engines, transmissions, etc.) that there are potentially hundreds of thousands or even millions of possible versions of a very common car such as a Toyota Camry.

Hidden benefit of Scrum

Of the many commonly cited advantages of Scrum, the most important benefit is implied but never really articulated: reducing what I call "intellectual inventory". Software development can have many forms of this type of inventory. It can be any artifact (requirement/PRD, design document, untested code, etc.) that is completely created before being passed off to another functional role.

The key steps of the waterfall process are actually all forms of Intellectual Inventory: requirement definition, design, implementation, and verification. In fact the waterfall model is basically building up inventories in large batches.

Creating requirements takes a lot of "non-development" time from team members like the Product Owner/Manager, User Experience, Design, etc. The more complete the requirements are, the more investment you are making. It takes development time to implement something that doesn't get tested. Even if it does get tested but the feature is too buggy to pass QA, you have still invested in something with zero return, even though your engineers were "done" with their phase of the waterfall. Make no mistake about it: intellectual inventory is something you spend real investment on.

Intellectual Inventory = Waste

In an Agile/Scrum model intellectual inventory = waste as well. Requirements that are twelve months from being implemented don’t need to be refined to the point that the feature could be implemented today. A lot can change in a year and that feature could get dropped, expanded on, or maybe it will be delivered in a different way (iPad app instead of desktop client, HTML5 instead of Flash, etc) because users and the market change. Clearly any time spent refining the requirements that are discarded would have been a complete waste. Of course in practice it isn’t possible to completely eliminate inventory, but it can be greatly reduced.

Let’s go back to that quote from Wikipedia and see what it looks like when hypothetically applied to software now (changes in bold):

“To prevent fluctuations in production it is important to try to keep fluctuation in the final assembly line to zero. Facebook’s final assembly line never assembles the same product features in a batch. Production is leveled by making first one feature, then another feature, then yet another. In Scrum, batches are made as small as possible in contrast to waterfall, where bigger is considered better. … Long changeover times have meant that economically it was sound to create as many features as possible. In Scrum this does not apply. Requirement changes are made quickly and improved even more with practice. In the 1980s it took two to three hours, in the 1990s it dropped from one hour to 15 minutes, now it takes three minutes.”

I have personally seen feedback from our user experience (UX) team make it into a shipped product in less than a month from when they did user testing on the product that was in development. That type of responsiveness is impossible in a waterfall model because the UX would be done in a batch that would then lead to a batch of requirements, and so on. Just as in automobile production, reducing batch production of intellectual inventory allows much more responsiveness to consumer demand and reduces waste.

Waste is probably worse in software development than in the auto industry. Even if a batch of cars is built that ends up having low demand, they probably won’t be sold for less than the marginal cost to produce them, so the inventory still retains much of its value. If a requirement or design is never built because consumers don’t want it, then the value of that inventory is zero. All of that investment was lost.

Just one similarity

Using JIT concepts to reduce intellectual inventory is of course just one aspect of Scrum that is similar to the principles of TPS. Here are some others:

  • Self-organizing teams with daily status meetings and weekly retrospectives are just a form of Kaizen.
  • The ability for any team member to bring any issue to light at any time in the development process is similar to the role of an Andon system.
  • Using a “pull” system (as opposed to waterfall’s “push”) is basically Kanban.

So the next time you are trying to convince someone in your company of the virtues of adopting a “radical” process like Scrum, point out how well these same principles worked for a plucky upstart named Toyota.

Full disclosure: I have an MBA (that’s how I learned about MBA-ish stuff like the Toyota Production System) and am a Certified Scrum Product Owner. ;)

Amazon Prime: the web “Costco” membership https://pseudosavant.com/blog/2011/01/10/amazon-prime-the-web-costco-membership/ https://pseudosavant.com/blog/2011/01/10/amazon-prime-the-web-costco-membership/#comments Mon, 10 Jan 2011 07:58:31 +0000 http://pseudosavant.com/blog/?p=463

It seems like I can hardly make it through a week without telling someone about Amazon Prime for the first time. It is quite simply one of the best things on the web. If you have nothing but extra time to run extraneous errands and/or wait weeks for free shipping, then don’t read any further. But if you’d like to reclaim more time in your life, read on.

For those of you who have never heard of Amazon Prime before this is it in a nutshell. Prime is a $79/year* membership with Amazon that entitles you (and three members of your household) to unlimited free two-day shipping with no minimum order price on any product fulfilled by Amazon.com. If you are really impatient you can even get $4/item overnight shipping.

*Prime is free if you are a student or you buy diapers from Amazon.

“I usually get free Super Saver shipping anyway”

A lot of people I mention Prime to tell me that they already usually get free shipping at Amazon.com by waiting until they have $25 of items to buy. Here is why Prime is different and vastly superior though. First, you don’t have to scrape together $25 of things to buy when you just need one $8 product. Second, Amazon doesn’t just have UPS/FedEx/USPS/etc ship the package quicker for Prime; they actually fulfill the order and get it to the shipper sooner.

Back in my pre-Prime days my Super Saver orders usually shipped 1-3 days after I placed my order. Now with Prime it almost always ships within 12 hours, usually less. Basically you get to cut in line, just like the people in the first-class security line at the airport. So instead of waiting a few days to build up an order, three days for fulfillment, and seven days for shipping, it is now less than a day to fulfill and two days or less to ship.

Many purchases can’t wait a week

There are a lot of things I buy that I don’t need this very second, but that are impractical to wait a week or more to receive. The Super Saver shipping may be fine for much of the typical Amazon.com fare (books, music, videogames) but it doesn’t work for these items. Here is a small sample of things I purchased in the last 12 months that arrived in two days that I wouldn’t have waited a week for:

The best part of getting all of this stuff delivered to my house? Each item would have required a 30-60 minute errand to go buy it, I could easily comparison shop and read product reviews, and Amazon.com was always cheaper than the brick-and-mortar store I would have driven to. So the membership pays for itself, and it gives me more free time to do other things that are more important/fun.

Do yourself a favor and sign up for an Amazon Prime free one-month trial and see for yourself how much time and money you can save.

JavaScript Statistics and Math Library https://pseudosavant.com/blog/2010/12/22/javascript-statistics-and-math-library/ https://pseudosavant.com/blog/2010/12/22/javascript-statistics-and-math-library/#comments Wed, 22 Dec 2010 07:23:46 +0000 http://pseudosavant.com/blog/?p=429

Recently I started my own Google Code project to share some of the reusable code I have developed. There are lots of good general-purpose JavaScript libraries such as jQuery or Closure, but sometimes there are things that are out of scope for these types of libraries. One of those things is basic math and statistics operations.

Update: Version 2.0 is out now. Check it out here.

This deficiency became apparent to me while coding something for work where I wanted to use JavaScript to calculate some basic math and statistics: mean, variance, standard deviation, etc. There weren’t any good libraries or code snippets I could find to do these functions easily.

I ended up throwing something together that got the job done but later decided that I could make it a lot simpler and more reusable. It turns out that JavaScript’s prototyping capability is perfect for these types of operations. It turns Math.max.apply(Math, myArray) into myArray.max().
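To illustrate the prototype approach, here is a rough sketch of how a few of these methods could be implemented (my own illustration, not the actual library source):

```javascript
// Sketch: extending Array.prototype with basic statistics helpers.
// Illustrative only -- not the library's actual code.
Array.prototype.sum = function () {
  var total = 0;
  for (var i = 0; i < this.length; i++) total += this[i];
  return total;
};

Array.prototype.max = function () {
  // The prototype method just hides the apply() call.
  return Math.max.apply(Math, this);
};

Array.prototype.mean = function () {
  return this.sum() / this.length;
};

Array.prototype.variance = function () {
  var m = this.mean(), sq = 0;
  for (var i = 0; i < this.length; i++) sq += Math.pow(this[i] - m, 2);
  return sq / this.length; // population variance; a sample variance would divide by n - 1
};

Array.prototype.stdDev = function () {
  return Math.sqrt(this.variance());
};
```

With those in place, `[1, 2, 3, 4].mean()` returns 2.5 and `[1, 2, 3, 4].max()` returns 4.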

These are the methods I have implemented so far:

  • Array.prototype.sum(): returns sum of all array values
  • Array.prototype.min(): returns the lowest numeric value of an array
  • Array.prototype.max(): returns the highest numeric value of an array
  • Array.prototype.mean(): returns the arithmetic mean of an array
  • Array.prototype.median(): returns the median of an array
  • Array.prototype.sortNumber(boolean descending): returns the array sorted ascending, or descending if called as sortNumber(true)
  • Array.prototype.variance(): returns the variance of an array
  • Array.prototype.stdDev(): returns standard deviation of an array
  • normsinv(p): returns lower tail quantile for standard normal distribution function

These are just some basic Excel-style functions. I am open to adding more functionality to the library, so if you would like to contribute some code or just have a suggestion for a useful function, please leave a comment.

Download the script and examples here. You can also link against the minified version here if you would always like to use the latest version.

I’m back. https://pseudosavant.com/blog/2010/10/20/im-back/ https://pseudosavant.com/blog/2010/10/20/im-back/#comments Thu, 21 Oct 2010 03:43:40 +0000 http://pseudosavant.com/blog/?p=383

It has been just over two years since my last post. I have had many intentions of writing on my blog but apparently always found something else to do. So what have I been up to?

I started working at DivX as one of the product managers on the consumer software team. It has been a lot of fun and I have been able to take the lead on a lot of interesting projects. My products are the Codec Pack, Converter, and Web Player. The digital video space has been a really interesting place for the last couple of years and we’ve been able to turn some key threats (Windows 7 and HTML5) into big opportunities through our software (the Codec Pack and Web Player, respectively). The Web Player in particular is something that I have spent a lot of time on over the past year. We recently released a beta that introduces two new features that I am particularly proud of.

First, it now supports the HTML5 <video> API, which is something I have been following for a long time. I am really glad to see HTML5 finally getting some traction, but the one area where things are still kind of a mess is the <video> space. There isn’t a consistently supported format across the major browsers yet, and some browsers have pretty low playback quality. We are helping to alleviate this by delivering an HTML5 <video> platform with consistent support for multiple formats (H.264, MP4, MKV, MOV, and DivX), all in very high quality with hardware acceleration (when available), on Windows and Mac for Firefox, Chrome, and even Internet Explorer. Users just have to have our plugin installed and it will support standard HTML5 <video> markup. Check out a little HTML5 demo I made to see how it works.
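For reference, standard HTML5 <video> markup of the kind the plugin hooks into looks roughly like this (a generic example; the file names are placeholders, not taken from the demo):

```html
<!-- Generic HTML5 video markup; file names are placeholders. -->
<video width="640" height="360" controls>
  <source src="movie.mp4" type="video/mp4">
  <source src="movie.webm" type="video/webm">
  Fallback text for browsers without video element support.
</video>
```

The browser (or a plugin standing in for it) walks the source list in order and plays the first format it can handle.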

Second, we introduced a new feature we are calling DivX HiQ. It lets you use the DivX Plus Web Player on popular sites like YouTube and Vimeo instead of their default Flash players. Because the Web Player is solely focused on video, unlike Flash, it offers a much better experience with dramatically lower CPU and power consumption. Don’t just take my word for it though; see what users are posting about it on our forums.

I have also been doing a bit of programming, since I’ve been getting really involved in a lot of web/HTML5 stuff through my work. I made an “Instant” search using Bing’s AJAX API, which was novel until Google Instant came out. It is a fun project and it works really well on mobiles. I also created a Google Code project for a JavaScript statistics library (I hope my Purdue professors are proud :) and a Silverlight audio player I made that supports the HTML5 audio API. I’ll probably blog about them more later.

That is a sampling of what I have been up to in the tech world. I will try to post more regularly (once or twice a month) about what is happening in HTML5, media, and the mobile landscape.

The Seven Ways Each “Next-Gen” Console Succeeds https://pseudosavant.com/blog/2008/08/28/the-seven-ways-each-next-gen-console-succeeds/ https://pseudosavant.com/blog/2008/08/28/the-seven-ways-each-next-gen-console-succeeds/#comments Thu, 28 Aug 2008 16:00:00 +0000 http://pseudosavant.com/blog/?p=372

I knew I’d get a lot of complaints over yesterday’s post on “The Six Ways Each ‘Next-Gen’ Console Fails”. Even though I love to critique everything, there are still things I like. Here are seven ways that I think each of the current “next-gen” consoles succeeds. This is just my take on it; what do you guys think? What do you like the most?

Xbox 360

  • Xbox Live: Does anything really need to be said about this? Cross-game chat, invites, picture/text/video messaging, unified friends list, single sign-on account, etc. It is the pinnacle of a complete online gaming experience that is easy to use.
  • Xbox Live Arcade and XNA: Microsoft has really created an awesome solution for bringing down the barriers to game development. I love the classics like Street Fighter II and the new originals like Geometry Wars.
  • HD Out Of The Box: It is great how the Xbox 360 (with the exception of the arcade model) comes with cables to play HD right out of the box. No dealing with some Best Buy employee trying to sell you on the $2000 Monster HDMI cable for “better digital frequency response modulation support implementation colors”.
  • Fast Disc Drive: Every good PS3 game requires an install now, but the 360’s drive is plenty fast to just drop in and play. I’d prefer not to switch discs for a game (which hasn’t happened on the 360 for me yet), but I don’t know how that is any worse than having to wait through an install before every act of Metal Gear Solid 4 on the PS3.
  • Wireless Controllers: It is easy to take this for granted now that every system has them, but it is really nice not to have to deal with a bunch of wires. Especially when you have four people playing on a console at once.
  • Choice: Probably my favorite aspect of Microsoft’s approach to gaming is having options. I could buy an Arcade model and later add HD cables or a hard drive to get the Pro-level experience. I can rent/buy content from various online sources (Amazon Unbox, CinemaNow, and soon Netflix) other than Xbox Live. I can install games if I want to (soon), or I can just drop in the disc and play.
  • Multimedia: You can play and control your music (even from an attached iPod) even while you are in a game. It supports a lot of formats (H.264, DivX, Windows Media, MPEG-2, MP3, and AAC). Xbox Live has HD movie and TV content. The Media Center Extender functionality is awesome. I can be watching a TV show and accept an invite to play Halo 3 and it will switch right to the game (Halo 3 is usually in my drive :). It has a built-in IR “eye” for my Harmony too.

Playstation 3

  • Quiet: There is no doubt about it, the PS3 is the quietest “next-gen” console out. It is probably quieter than my first PS2 actually. Sony really hit a home run by not being an audible nuisance.
  • Downloadable games: Sony is really leading the way with full-size game downloads being simultaneously released on disc and online; only Steam does it better. Not only is it convenient to just download a game, but then you don’t need to switch discs to play a different one.
  • Multimedia / Blu-ray: Like the 360 it has broad media format support and downloadable movie and TV content, but it also supports Blu-ray HD movies. In fact, it is still really the only Blu-ray player worth buying. Without the PS3 it is likely that Blu-ray would have lost the format war.
  • Built-in wired AND wireless networking: It is nice to have both options. The wireless-G is great for most people, but I can still use a wireless-N bridge if I need to (I do on my 360 for HD streaming).
  • Uses standard hard drives: It is really nice that you can purchase any 2.5” SATA hard drive to expand the storage of the PS3. Sony realizes that it should make money selling content (games, movies, TV, etc) not proprietary storage to hold it.
  • Web Browser: This is something that I found a lot more useful than I was expecting. It is really a pretty good browser that even supports most Flash content and the controller is utilized really well for navigation.
  • Wireless Controllers: Now that rumble is back in the form of the Dualshock 3 controller the PS3 has a proper wireless controller. The only thing I’d fix would be to include a slightly longer recharging cable, but at least it uses standard USB connectors so I could always buy one.

Wii

  • Different Kind Of Controller: While I personally don’t like how the Wii’s controllers work, it obviously appeals to a lot of other people, especially “non-gamers”. I know the Wii is the first console my sister ever wanted. Hopefully this will help break game design out of the repetitive use of the same control schemes we’ve seen for some time.
  • Low Launch Price: You can’t overstate how much price has played into the success of the Wii. It definitely reaffirmed where the volume is in the market, and I’m sure that will play into Microsoft’s and Sony’s future console plans; that is good for gamers.
  • Included Game: The Wii is a major throwback to an era where a game was included with every system, and that game was a major driver for sales of the console. Wii Sports is still one of the best games for highlighting the Wii’s strengths while effectively managing its weaknesses. How many launch titles can usually pull that off?
  • Wireless Controllers: It had to be mentioned for the Wii too right? Due to the use of Bluetooth for the Wii-mote, it has actually become quite useful for some really cool homebrew hacking of motion sensing software too.
  • Mii’s: What a great way to personalize your gaming experience. They aren’t just icons for your profile; they are actually playable in your games. The Mii concept is so great that Microsoft is even copying it now with their avatars.
  • Backwards Compatibility: The Wii is the only system that still completely supports the last-gen games. It even supports the old Gamecube controllers.
  • Web Browser: The Wii-mote makes a great control device for using the web on a TV. Built-in Flash support is really great too. Although it isn’t included with the console, $5 is a very reasonable price.
The Six Ways Each “Next-Gen” Console Fails https://pseudosavant.com/blog/2008/08/27/the-six-ways-each-next-gen-console-fails/ https://pseudosavant.com/blog/2008/08/27/the-six-ways-each-next-gen-console-fails/#comments Wed, 27 Aug 2008 14:56:09 +0000 http://pseudosavant.com/blog/?p=364

People sometimes think that I only critique non-Xbox game consoles, but the truth of the matter is that I critique everything I use. Here are six ways that I think each of the current “next-gen” consoles fails. The order goes from the console I am most familiar with to the least (I don’t own a Wii personally), and from biggest failure to smallest. This is just my take on it; what do you guys think? How would you change them?

Update: I posted a follow-up The Seven Ways Each “Next-Gen” Console Succeeds

Xbox 360

  • HD-DVD: It should have come built-in. It would have allowed more room for games, disc-based HD movies, and the format may have actually survived if one of its two biggest backers actually got behind it.
  • Better storage options: It should come with a much larger hard drive (especially at launch) and the ability to swap in standard 2.5” SATA drives. It would be really nice if it had the ability to use any standard USB key or SD card for a memory card too. The day of proprietary memory cards really should have ended already.
  • Headset: Why aren’t the audio controls built into the controller so that any headset would work? I should be able to mute my wireless headset without taking my hands off the controller.
  • Quieter: I know the DVD drive spins fast so that load times are shorter but some sound insulation would be nice. Maybe installing onto the hard drive will help this, but see bullet #2.
  • Web Browser: I could understand not having a browser when the console launched, but the time is far overdue for the 360 to get the web. The PS3 and Wii each have it, and it is more useful than I would have guessed.
  • D-Pad: For a console with so much classic and arcade content it is ridiculous how bad the D-pad is. I really think that they’d sell more XBLA games if it didn’t feel so wrong to play classics like Street Fighter II with that D-pad. It looks like they may actually fix this soon.

Playstation 3

  • Better UI than the XMB: Sony, would it kill you to use some icons with color? After all, it is one of the key ways for people to quickly identify items (see #6). The XMB is about as intuitive a way to tie together hierarchical lists of functions and data as any OS’s file system. Playing music should not use the exact same UI as your friends list.
  • Built-in IR “eye”: It may seem petty, but it drives me nuts that every AV device I own is controlled by my slick Harmony universal remote except for the PS3. If it really wants to be the center of my entertainment it needs to work with my existing stuff.
  • More memory for textures: The low quality textures so often seen on the PS3 are really the only thing keeping it from surpassing the Xbox 360’s graphics. Low quality textures were the only thing that marred the otherwise amazing graphics of Metal Gear Solid 4 for me.
  • Cables: Sony, get with the program and start including component HD and/or HDMI cables in the box. It is so nickel-and-dime to sell a $500 console based on its ability to play HD games without the cables to do it.
  • Memory Card Readers: Drop ‘em, they probably cost more than an HDMI cable but almost nobody uses them.
  • Real motion sensing: It is either all or nothing for me on this one, so Sony really shouldn’t have even bothered because Sixaxis is thoroughly mediocre. Even PS3 exclusive developers are saying that it is “useless”. It’d probably save Sony some money on making controllers too.

Wii

  • Lackluster motion control: FYI, I really don’t enjoy playing the Wii. It isn’t that the concept is bad, I think the concept is great. The motion control is just so terrible though. For proof look no further than the existence of a first-party add-on to fix the motion control.
  • Weak GPU and CPU: The graphics and CPU power should have been at least 50% greater than the original Xbox, and it should be able to output video in 720p. I’m not saying it should compete with the PS3/Xbox 360, it just needs to be better than all of the last-gen stuff. Low quality 480p looks pretty bad on those fancy flat-panel TVs. :)
  • Friend Codes: Here is a tip Nintendo, look at what Microsoft is doing with Xbox Live and copy it.
  • Media support: It is really ridiculous that the Wii doesn’t have legitimate support for playing your pictures, music, and videos. You can do some of it, but it can’t do it over your network so I hope you have a lot of big SD cards. I’m sure that would be a popular feature for their target market.
  • Better storage options: It is clear now that the built-in storage on the Wii is lacking and that Nintendo still doesn’t have a fix lined up. Why not increase the built-in storage and/or allow memory cards to (actually) expand the console’s usable space?
  • Wired Networking: Sometimes you just want the reliability of a wire. The lame duck Wii WiFi also requires a “long preamble” (usually only required for old wireless-B devices) which kills your wireless-G and N performance.
Digg’s Dupe Checker: Totally Original I Swear https://pseudosavant.com/blog/2008/08/11/diggs-dupe-checker-totally-original-i-swear/ https://pseudosavant.com/blog/2008/08/11/diggs-dupe-checker-totally-original-i-swear/#comments Mon, 11 Aug 2008 15:00:33 +0000 http://pseudosavant.com/blog/?p=278

Could Digg’s duplicate checker be any worse? It constantly asks me if my original posts are duplicates. When I submitted my post about “Five Firefox Extensions That Should Be Built-In”, here are the “possible duplicates” Digg found. Tell me how close you think these submissions are to mine.

Digg’s “Possible Dupes” for “Five Firefox Extensions That Should Be Built-In”

Honestly, I can’t figure out what kind of “magic” is going on in their algorithms that would connect my post with any of these. “Itches You Shouldn’t Scratch” or “Can English Be The Official Language” really?


The really funny part was that two of the “possible dupes” it suggested were actually submissions of the exact same link, with the exact same title. Apparently their dupe checker didn’t help out on that one. When I checked on submitting some of my older posts that had already been submitted to Digg from a site I used to cross-post at it didn’t suggest a single possible duplicate. Go figure.

This is just reason number 53 for why I don’t really use Digg anymore. Here’s a screen cap of the full dupe checker results if you’d like to see them yourself.


Media Center: It’s Official…Wait Until Windows 7 https://pseudosavant.com/blog/2008/08/08/media-center-its-officialwait-until-windows-7/ https://pseudosavant.com/blog/2008/08/08/media-center-its-officialwait-until-windows-7/#comments Fri, 08 Aug 2008 19:27:20 +0000 http://pseudosavant.com/blog/?p=352

I wrote about Windows Media Center TV Pack yesterday and how it looked like Microsoft was going to royally botch it up. At the time it was all hearsay, but not anymore. It looks like Microsoft decided to move up the announcement date for the TV Pack from next month to today. This is a situation where nobody comes out a winner.

A Lose-Lose Outcome

It is easy to look at the end-users and realize why we’d all be upset. The TV Pack is nothing like what was originally anticipated, and there is no official channel, support, or upgrade path for anyone other than buying a new PC. Who wants to buy a new PC just to get a software update?

Microsoft’s partners are also losing out on this one. I personally had been waiting for DirecTV support and was ready to switch to satellite as soon as it arrived. It would have been really nice to have a more integrated fully-digital solution, but it looks like I won’t be switching now. All the issues surrounding CableCard will probably be ironed out by the time DirecTV is on Media Center, or Duke Nukem Forever comes out.

You might think that OEMs would appreciate being the only channel to get the new software bits, but how many people are going to trust Microsoft or an OEM to support any product they buy? It’s hardly an incentive to buy that new Windows Vista machine you’ve heard about in the ads. In my book, Media Center is now a complete lame duck that will never flourish.

It’s Apple’s Move Now

This also further illustrates the commonly held view that people should just wait for Windows 7, because that is exactly what these actions are saying. While we wait for Windows 7, Microsoft is just going to leave the door gaping open for Apple to come in and steal the digital living room. Honestly, even AppleTV’s history hasn’t been as bad as Media Center’s.

Keep in mind, I actually like Vista and Media Center. Dictating moves like this to your users is straight out of Apple’s playbook. I hate it when they do it, and I hate it when Microsoft does it too. I just can’t believe how many of the usual sources aren’t running with this story.

Microsoft Wants Media Center To Fail…I Swear https://pseudosavant.com/blog/2008/08/07/microsoft-wants-media-center-to-faili-swear/ https://pseudosavant.com/blog/2008/08/07/microsoft-wants-media-center-to-faili-swear/#comments Thu, 07 Aug 2008 15:00:00 +0000 http://pseudosavant.com/blog/?p=339

You may have noticed that I regularly tout Media Center as one of my favorite features of Windows Vista. I have even been using/loving it as my sole DVR for about three months now. It should come as no surprise then that I have been following the next iteration, codenamed Fiji, quite closely. While the software sounds good for the most part, I can’t understand why it seems that Microsoft is trying to make sure Vista Media Center (VMC) never takes off.

Overall I find the concept, and to a large extent the execution, of VMC to be awesome. Here are some of the strengths of VMC:

  • No DVR fees to the cable company (or Tivo)
  • Comes built into Vista
  • Easily share the DVR through extenders (of which there is already a huge base of Xbox 360s)
  • Top notch user interface (most of the time)
  • Portable recordings

I especially like that the recordings are just files that I can play on my laptop or stream over the Internet (via Orb) when I travel. You can also easily sync and automatically transcode recordings to WMP-compatible media players, Zunes, and even Windows Mobile devices. I do wish they’d develop/release a softsled (software-based extender) though.

Vista Media Center TV Pack

Microsoft is set to announce the “Vista Media Center TV Pack”, formerly codenamed Fiji, at next month’s CEDIA Expo. It will bring welcome features such as proper native QAM support and heterogeneous tuner support, both of which I’ve been waiting for. While many were expecting features such as support for H.264 and DirecTV, and the ability to have widescreen thumbnails, no such features are showing up in testers’ hands. Honestly, overall it is a complete disappointment. Not just because of the software; it is the delivery too.

Epic Fail

It gets ugly when you start to look at how you can get some TV Pack goodness for yourself. First problem: you can’t upgrade to it. Apparently a fresh install is required; just what I want to do with a system that is set up how I like it. Second, it is only available through OEMs! But wait, it gets worse. Third, all the OEMs have said they are only planning on supplying the TV Pack with new computers.

Let me get this straight: because I bought and installed Vista myself, a very common scenario for most current Media Center users, I don’t get access to a key update to an included component of the OS? And even if I had bought my HTPC through an OEM, they aren’t going to support the product further? Who is making these decisions, and how do they sound right to them? As if I didn’t already feel like my copy of Vista Ultimate was completely lacking anything Ultimate about it.

It must be awful to be one of the developers working on Media Center at Microsoft: so much work poured into a great product, only to have it destroyed in the marketplace by bad business decisions. The many VMC users out there are pretty loyal, but we will only take so much. It is like we are continually waiting for the next installment to really make it all right (satellite support, good digital cable support, broad codec support, softsled, built-in place shifting, etc). Microsoft is lucky my DVR options are so bad to begin with, but that won’t last forever.

The Problem Of Free: Why Charging For Xbox Live Is Good https://pseudosavant.com/blog/2008/08/06/the-problem-of-free-why-charging-for-xbox-live-is-good/ https://pseudosavant.com/blog/2008/08/06/the-problem-of-free-why-charging-for-xbox-live-is-good/#comments Wed, 06 Aug 2008 14:00:14 +0000 http://pseudosavant.com/blog/?p=321

A common complaint about Xbox Live is that Microsoft is charging for something that you get for free on any other platform (PC, PS3, Wii). For many people free is their favorite four-letter word, and it is just a price you cannot beat. The inability to charge for online services of any sort (read: not just gaming) is a major problem though. This isn’t just about games; here’s why.

Money Isn’t The Root Of All Evil, It Pays My Mortgage

I don’t know exactly how this all started, but the very thought of paying for any service online is almost unthinkable for most people. Microsoft made a bold, but I think smart, choice when it decided to make the Xbox Live Community Games a marketplace of buyers and sellers; a place where creators can be rewarded for their work and aren’t pressured to work for free. After all what is really so bad about paying someone for something they do?

With the Xbox 360 (or any other console) you have people paying $300-$400 for a console, buying multiple $60 games and extra $40 controllers, and then they balk at paying $45 for 13 months of service? That is only $3.46 per month, which is pretty reasonable considering most MMOs cost about three times as much. Do people think it just goes straight into Steve Ballmer's personal bank account or something?
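For the skeptical, the per-month figure above is straightforward division; here is a quick sketch using the 2008 prices quoted in this post (not current pricing):

```python
# Cost of Xbox Live as quoted above: $45 for a 13-month subscription card.
subscription_price = 45.00  # USD
months = 13

per_month = subscription_price / months
print(f"${per_month:.2f} per month")  # → $3.46 per month
```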

The money really goes to pay people (regular ones, just like you and me) who work to create the hands-down easiest, most seamless, best-integrated, and arguably best online gaming experience available. I don't think it is a coincidence that the only online gaming service you pay for was found to be the best, even amongst PS3 and Wii users, in a recent poll. Obviously this would be a different discussion if the PlayStation Network were overwhelmingly seen as the top platform, but it isn't.

You Don’t Get What You Don’t Pay For

There is an old adage that says "You get what you pay for." Mostly, though, I use the inverse of that statement: "You don't get what you don't pay for." I'm sorry if this sounds like Econ 101, but in a market-driven economy paying is a crucial way of voting (signaling) for what you want so that people will build it. A paying transaction inherently carries the information that you approve of what someone did, and that they created value beyond what you paid them.

In the situation common on the Internet today, the people who actually pay are the advertisers. So many sites and services are slaves to their advertisers because their users won't pay a dime. I have asked many people I know who live and die by Facebook how much they would pay for it, and they all said zero. They all spend at least an hour a day on it, yet it is apparently worthless to them, and much of what they want is never conveyed in any meaningful transaction.

So Facebook becomes a slave to advertising, pimping out its users' information for every cent it can get. It isn't unrealistic to think that if people paid for more services, their personal information wouldn't be shared quite so freely. The sites don't work for you, though; they work for the advertisers. I'm not saying all online services or sites should shun advertising, but it is ridiculous how often advertising is the answer to every Web 2.0 business model.

In many ways the Internet has been one of the greatest economic tools of all time. As more and more things are done online, though, viable marketplaces will have to be developed. Almost everyone shops online for tangible products, but something really needs to be done so that intangible goods aren't a solely advertiser-sponsored economy.

]]>
The Value Of Open Platforms (aka Why I Don’t Own An iPhone) https://pseudosavant.com/blog/2008/08/05/the-value-of-open-platforms-aka-why-i-dont-own-iphone/ https://pseudosavant.com/blog/2008/08/05/the-value-of-open-platforms-aka-why-i-dont-own-iphone/#comments Tue, 05 Aug 2008 15:00:08 +0000 http://pseudosavant.com/blog/?p=312 I have recently been in the market for a new smartphone. The iPhone looks like some nice hardware, and I’m already an AT&T customer, but after seeing news like this I’m just not buying. Apple has proven to me that I don’t want to live in a closed ecosystem. Sometimes it really is true that “you don’t know what you got ‘till it’s gone.”

BTW, I really didn’t want to post anything pro-Microsoft or anti-Apple today, but this was the news I was dealt. :)

A Palm Refugee

Basically, you could say that I am a long-time Palm user who is growing increasingly impatient. I like the ease of use and efficiency of the PalmOS UI, but its underpinnings are really starting to show their age. This has been made very apparent by adding a data plan to my phone recently.

I like having the data access a lot more than I would have expected; Opera Mini is a great browser, but the Java VM that runs it isn't so great (it crashes regularly). Add on the lack of native Bluetooth A2DP (which my car's audio system does support), a so-so email client, and Palm's tardiness with a new OS, and you can see why I'm looking for something better.

Honestly I have to admit that the iPhone is probably the best device right now for what I want (strong multimedia, great web browsing, good email client, decent form factor), although it is far from perfect (the phone part isn’t amazing, no built-in search, short battery life with 3G on, no A2DP, etc). So why am I not buying it?

My Apple Epiphany

I must confess that I generally don't like Apple, and I think their products are over-hyped most of the time ("Apple is reinventing the home stereo with the new iPod Hi-Fi" –Steve Jobs), but they generally make some good products. The iPod, MacBook Pro, MacBook Air, and Mac Pro are all legitimate top-of-the-line products that outclass most competitors' offerings. I realized, though, what my real issue with Apple is: their business practices.

This is further exacerbated by the fact that when you go Apple your choices are mostly dictated to you by Apple (aka Steve Jobs). Why will Adobe’s CS4 suite be 64-bit only on Windows? An Apple business decision. Why is the iPhone only available on AT&T? An Apple business decision. Why couldn’t .Mac users wait until MobileMe was stable to switch their e-mail over? Again, an Apple business decision.

The problem is particularly pronounced on the iPhone as it is an insanely closed platform (without jailbreaking it). It is like the iPhone is nothing but a DRM device, because basically it is. Lock down my music, check. Lock down my videos, check. Lock down my service provider, check. Lock down my choice of applications, check. Pretty much anything you can do with it is locked down.

Open Platform != Open Source

Don’t confuse an open platform with open source. Windows, PalmOS, Symbian, and even Mac OS X are all basically open platforms (but clearly not open source). You can run any app designed for the platform whether it is specifically blessed by the developer of the platform or not. If Windows or Mac were closed platforms you couldn’t make a third-party application like Firefox because Microsoft and Apple both already have competing web browsers. Look on the iPhone though and you’ll see that Apple won’t let any developer make a competing media player. See the difference?

I have numerous third-party apps on my Treo 680: Google Maps, Opera Mini, Gmail, Pocket Tunes, Facebook, a dictionary, etc. It may seem funny, but it would really bother me to have Apple deciding what I can and cannot use. Simple things, like the program I use to track my gas mileage, become switching costs if there isn't a viable alternative on a new platform. After Apple's recent trend of pulling apps from iTunes, I really can't say I trust them.

Technically there are Windows Mobile 6 phones that have all of the features I want (A2DP, Opera, 3G, Wi-Fi, real multitasking), but I just don't think I could stomach the stodgy UI. So I guess I'm left waiting to see whether Android materializes into something good, whether Palm can finally bring out their new OS, or whether Windows Mobile 7 gets a new UI, because any of those will happen before Apple truly opens up the iPhone.

]]>
Mojave: An OS By Another Name Just Wouldn’t Be The Same https://pseudosavant.com/blog/2008/08/04/mojave-an-os-by-another-name-just-wouldnt-be-the-same/ https://pseudosavant.com/blog/2008/08/04/mojave-an-os-by-another-name-just-wouldnt-be-the-same/#comments Mon, 04 Aug 2008 14:00:42 +0000 http://pseudosavant.com/blog/?p=275 For those of you who maybe haven’t heard about Microsoft’s latest OS, “Mojave,” you should check out its website before reading any further. Even if you’ve already heard about Mojave, you owe it to yourself to check out the videos on their site before you read on.

What is up with the "blogosphere" on this one? Some of the titles would make you think that Microsoft lied about what the software could do, when really the only "lie" they told was the name of the OS. I don't see how that amounts to Microsoft lying people into liking it. The people in the videos obviously really liked it. I personally like Vista, but I was genuinely surprised by how much some of these people fawned over it. They were that impressed.

My History With Vista

I was really skeptical of Vista at first myself. I had tried out the beta versions and hated every single one. Literally the only reason I switched my desktop over to Vista was so I could use Media Center on my Xbox 360. My desktop is basically a file and print server for our laptops, so I didn't really care if it wasn't that great so long as that stuff worked. However, within about a month of having Vista on my desktop I switched over my laptop, and a couple of months later my wife's got switched too.

I should mention that I didn't switch over until Vista had been out for about six months, so I missed the launch-day issues, but then I never switch to a new operating system right when it comes out. No matter who makes it, a new OS always has some fairly significant bugs or quirks. I would probably have hated Vista in January of 2007, but Vista in August 2008 is a different story. When I saw ExtremeTech defending Vista, I knew the tide was turning for Microsoft.

For more of my ramblings on Windows Vista and XP check out this post.

]]>