Meta “When is it stealing?” Inception.
https://pseudosavant.com/blog/2014/02/27/meta-when-is-it-stealing-inception/ (Fri, 28 Feb 2014)

Almost anyone who has a blog has found the entire contents of their blog copied on various blogs around the world. A startup has taken this wholesale stealing borrowing republishing syndicating to a new place: professional narration. Unfortunately it takes <sarcasm>a lot of work</sarcasm> to figure out how to ask for permission to reuse someone’s content, as Scott Hanselman recently found out.

If you are interested in the narration of Scott’s blog post about narrated blog posts, you can download the MP3 or check out the YouTube video below. I figure it is OK that I give this content away, since it would be unusually difficult for me to figure out how to ask permission first.

For those who would rather read a transcript of the narration of his blog post about narrated blog posts, I have included a cleaned-up version of the automatic closed captions that YouTube generated below (the few spots the captions mangled beyond recovery are marked [unclear]):

When is it stealing? As posted on hanselman.com, written by Scott Hanselman.

Anything you put on the Internet is going to get stolen. It’s preferred that it be shared and linked to, but often it gets copied, and copied again. RSS is magical, but it makes it even easier to programmatically syndicate (copy) content. Search around and you’ll likely find complete copies of your entire blog mirrored in other countries. There are so many websites now, media empires, that have taken aggregation to the extreme, giving it the more palatable name “content curation.” Now, to be clear, I respect the work involved in curation. Sites like [unclear] require work and attribute creators. But taking a post, copying unique content, even paraphrasing, and then including a small link just isn’t kind. Forget about the legality of it (remember, IANAL), but it’s just poor etiquette to not ask permission before using non-Creative-Commons content.

Every week or two I get an email from some large aggregation site that says “we’d love to reprint your post, it’ll get you more readers.” The few times I’ve done this they’ve gotten 50,000 views and I’ve gotten three hundred referral views, likely because the “originally appeared on hanselman.com” link at the bottom is in 4.5-point font. Sites like ViralNova and BuzzFeed are effectively reblogging and embedding machines powered by linkbait copywriters: “what happened next will shock you.” Even if you make a piece of software, someone may just wrap/embed your installer with their own installer and build a whole business around it.

Me reading your blog posts: today it was pointed out to me that a nearly seven-year-old (and not very good) blog post of mine had been narrated, effectively turned into a podcast, by a startup called Umano. By the way, it’s more than a little ironic that my post wasn’t even mine; it’s an excerpt, published with permission, of friend Patrick Cauldwell’s larger post. I’ve embedded the narrated version here.

First, let me just say that this is essentially a great idea. It’s the opposite of transcribing a podcast: it’s creating podcasts from existing content using professional narrators, not just text-to-speech. It could be great not just for the visually impaired, but also for anyone who wants to catch up on blogs while commuting.

Where’d the content come from? Here’s a screenshot of the post on Umano’s site. You can see my name, Hanselman, is there, but it’s not a link. The headline is a link, but you’d never know until you hover over it. There’s really no easy way to tell where, when, and how this content came about. I think that Umano could easily redesign the site to put the content owner front and center.

Podcasts and audio snippets from blog posts? Great idea. Except I wrote the script for this podcast. And if I wrote the script and they made the narration, then this must be a partnership, right? However, if we look at Umano’s own Terms of Use: “SoThree claims no ownership or control over any of the content you post to the Service (your ‘User Content’). You, or a third-party licensor, as appropriate, retain all copyright, patent and trademark rights to any of the content you post on or through the Service. You are responsible for protecting those rights.” OK, so they don’t own the content. “By posting your User Content on or through the Service, you grant SoThree a universal, non-exclusive, royalty-free license to reproduce, adapt, distribute and publish such content through the Service, and for the purpose of promoting SoThree and its services.” I’m pretty sure I haven’t granted them a universal license to my content, as I didn’t submit this link. On their homepage it says that you “tell us what articles should be voiced”: the community submits links, sometimes to content they’re a fan of but don’t own, then Umano narrates it. “You may not aggregate, copy or duplicate any SoThree content.” Wait, but I can’t copy their content? Their content that was generated from my content? Does this mean I can get a book from the library narrated and turned into a podcast?

“@umanoapp @shanselman I’m fairly sure audiobook creators get permission from the original authors.” I’m told by Umano’s Twitter account that I’m the first person to object to the content being copied without permission. “@umanoapp I’m a longtime proponent of accessible content, but surely I’m not the first person offended by discovering their content copied?” “@shanselman the feedback we’ve been getting from bloggers is they appreciate the distribution plus value-added. You are actually the first.”

I certainly don’t think Umano is malicious; Umano is perhaps naive if they think they can narrate blogs without someone speaking up. That said, their narrators are top notch and their site and app are both attractive and usable. Frankly, I’d be happy if they narrated my whole blog, or at least the good stuff and not a lousy decade-old post, and made a podcast feed of my blog like their competitor Castify, but I’d like Umano to do it with me. Sites like this should ask creators first, and their business model should be based on partnerships with content creators, not assumptions. Stitcher has the right idea: I’ve submitted my content to them and entered into a partnership that lets them ingest my podcasts and make a radio station. Even a single email from Umano, “hey, we would like to read your blog, click here and sign this little form,” would have been sufficient. Narrate first and ask questions later? Michael Dunbar noted as much with this tweet: “@shanselman @umanoapp [unclear].” The whole thing could have been avoided with manners; that is an easily solved problem.

And it’s not just a problem with Umano; this applies to all businesses and startups that rely on content created by others. I think it’s important to honor attribution. This isn’t about money; copyright and all those things do apply, rather this is about netiquette. When you’re building a business, build the model around partnerships and transparency, not assumptions around fair use and copyright. Ask first. What are your thoughts, dear reader?

Craftsman
https://pseudosavant.com/blog/2014/02/13/craftsman/ (Fri, 14 Feb 2014)

There is a trait I have had for a very long time which I only recently consciously realized. It is that I aspire to be a craftsman*. Wikipedia describes a craft as “lying somewhere between an art (which relies on talent and technique) and a science (which relies on knowledge).”[1] Something about being at the intersection of art and science has always been intoxicating to me.

I have always liked to create things, and appreciated the art of created things. As a senior in high school I took auto, wood, and metal shop at the same time, all year. I wasn’t just some student in those shop classes either; I was the top student. So even though I was generally a horrible student in high school, when it came to a class where I could create, I aspired to be a craftsman.

Some people are content with only learning enough to make a cutting board, and doing that over and over. A craftsman isn’t like that though. They are never content with where their skills are in their craft. They want to know how to use every tool in the shop so that they can create anything and everything. If there is something new, they want to know how to leverage it.

A craftsman is someone who equally values knowledge (what wood should I use?), continued learning (how can I make a jig to create this piece?), and practical application (creating the piece). Lastly, and I think most importantly, a craftsman is the type of person who takes personal pride in what they create. They’d gladly sign their name on what they make. Nearly two decades later I’m still very proud of the first-place-winning quilted maple curio cabinet hanging in my house that I made as a high school student. It is probably the only thing from high school I’m particularly proud of, in fact.

But I don’t create much using wood, metal, or socket wrenches anymore. My craft of choice now is software. The heart of software is creating things. Software is an amazing place where you can take the science of math and computers and apply it like an art to create something you can use. And not only can you use it, but because of the economics of software you can basically give it away for free to everyone you know, or even don’t know for that matter.

Professionally, I am a product owner. I thrive on figuring out what to create (knowledge) and working with a team to build what previously didn’t exist (application). I’m drawn to having other aspiring craftsmen on my team. It is my opinion that great software is created by craftsmen.

As a hobby I love to code. Outside of work and family it is the number one thing I do, but I don’t think I could ever do it as my day job. Perhaps that’s because at home I can code just to enjoy the craft. I don’t have to worry about the strategy, deadlines, or other constraints that exist in a business. I can just craft code I find beautiful.

Hopefully in twenty years I’ll still be proud of the nuances of some of the code I write, just like I am now knowing that my curio cabinet has dovetail joints instead of dados, and book-matched quilted maple instead of veneered maple plywood.

*It is my intention that the term ‘craftsman’ be considered a gender neutral noun.

[1] http://en.wikipedia.org/wiki/Craft

JS 101: Cache your selectors
https://pseudosavant.com/blog/2014/01/30/js-101-cache-your-selectors/ (Thu, 30 Jan 2014)

One of the slowest things you can do in JavaScript is work with the DOM, and one of the slowest DOM operations is performing a query to find DOM elements. Caching those queries can have a significant performance impact. Here’s how you do it.

Here is an example of the type of code you shouldn’t write, but that I have seen many times. It makes multiple changes to some element(s) and performs the query selector every time it is needed.
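
A sketch of the anti-pattern (the `.menu` selector is just for illustration):

```js
// Anti-pattern: the exact same DOM query runs on every line.
document.querySelector('.menu').classList.add('open');
document.querySelector('.menu').setAttribute('aria-expanded', 'true');
document.querySelector('.menu').style.maxHeight = '500px';
```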

However, you can ‘cache’ the result of queries you know you’ll use again in a variable. Then each time you need to operate on those elements you just use the variable you assigned them to. If there are queries you would use in multiple places in your app, it can be a good idea to run and cache them at the start of your app so that you can reference them later.
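
The cached version of the same sketch:

```js
// Query once, reuse the result everywhere.
var $menu = document.querySelector('.menu');

$menu.classList.add('open');
$menu.setAttribute('aria-expanded', 'true');
$menu.style.maxHeight = '500px';
```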

Another good, common practice is to prefix your cached variables with a `$` (like $ with jQuery) to indicate that the variable holds a query result. I follow this pattern even when I don’t use jQuery, as seen in the example above.

Another reason to use cached queries is that you can perform further queries scoped to the child elements of a cached element. Suppose you have a form and you will be working with many of its fields to validate and submit it. In this scenario I will typically query for the form first and then find the child elements of the form. In the example below this means that instead of looking at every `input` on the page and checking whether it is a descendant of `.myForm`, it only looks at the input fields that are child elements of `.myForm`.
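
A sketch of that pattern (the `validate` helper is hypothetical):

```js
var $form = document.querySelector('.myForm');

// Scoped query: only inputs inside .myForm are examined,
// not every input on the page.
var $fields = $form.querySelectorAll('input');

for (var i = 0; i < $fields.length; i++) {
  validate($fields[i]); // hypothetical per-field validation
}
```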

The best way to manage caching of your queries, though, is to make the computer do it for you automatically. A function like the one below can wrap the native DOM `querySelectorAll` (or jQuery) and automatically cache every result you look up through it. This is actually better than caching your selectors in advance, because the client only pays the cost of a query when that query actually gets used.
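
A minimal version of such a wrapper might look like this (the `$q` name is my own choice):

```js
// Memoizing wrapper around querySelectorAll: each unique selector is
// queried once; later lookups come from the cache. Note the cache can
// go stale if matching elements are added or removed later.
var $q = (function () {
  var cache = {};
  return function (selector) {
    if (!cache[selector]) {
      cache[selector] = document.querySelectorAll(selector);
    }
    return cache[selector];
  };
})();

var $buttons = $q('.button'); // hits the DOM
var $again = $q('.button');   // served from the cache
```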

JS 101: Global Variables
https://pseudosavant.com/blog/2014/01/29/javascript-101-global-variables/ (Wed, 29 Jan 2014)

Understanding global variables and scope is very important in JavaScript. Misunderstanding a variable’s scope can lead to odd bugs and broken code. This is made more problematic because the default scope in JavaScript is global. But what is global, and how should you use it?

In JavaScript all code shares the same ‘global’ scope. The global scope is actually just a reference to the top-most object in the scope chain. In browsers the global object is `window`; other environments, like node.js, have a different global object.

Global Variables

Any variable in the top-most global scope is considered a global variable. In the example below `a` is a global variable because it was defined at the top-most scope. Global variables are actually just properties of the global object, which is why `a === window.a` is true.
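
A minimal sketch of that example (browser assumed):

```js
var a = 'foo'; // declared in the top-most scope

console.log(a === window.a); // true: `a` is a property of window
```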

Generally speaking it is best to limit the number of global variables you use so that you don’t pollute the global namespace. You can prevent variables from becoming global by enclosing your code in an immediately invoked function expression (IIFE). Variables declared inside the function will be scoped to just that function.
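
For instance (a minimal sketch):

```js
(function () {
  var b = 'bar'; // scoped to the IIFE
})();

console.log(typeof b); // 'undefined': b never became global
```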

Variables declared without `var` are implicitly global, even when declared inside an IIFE. It is considered bad practice to declare variables without `var`, especially since it is easy to do by accident. Static code analysis tools like JSHint will specifically warn you about it.
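
A sketch of the trap:

```js
(function () {
  c = 'baz'; // no `var`: implicitly global (in non-strict mode),
             // even though it is inside an IIFE
})();

console.log(window.c); // 'baz'
```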

Sometimes you’ll want to export some variables to the global namespace, however. The example below shows a ‘good’ way to create global variables: I assign the local variable `y` to a global variable `z`, which I can then access anywhere.
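
Something like this (browser assumed, values illustrative):

```js
(function () {
  var y = 'some value'; // local to the IIFE
  window.z = y;         // deliberately exported to global
})();

console.log(z); // 'some value'
```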

Here are some other ways to create global variables:

Global variable in any environment
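
In non-strict mode, `this` inside a plain function call refers to the global object in browsers and node.js alike, so a sketch like this works anywhere (`appName` is illustrative):

```js
// Grab the global object without naming `window` or `global`.
var globalObject = (function () { return this; })();

globalObject.appName = 'my-app'; // a true global in any environment
```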

Export functions to global
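
And a sketch for exposing a single function from inside an IIFE (browser assumed, `greet` is made up):

```js
(function () {
  function greet(name) {
    return 'Hello, ' + name + '!';
  }
  window.greet = greet; // expose only what you mean to share
})();

console.log(greet('world')); // 'Hello, world!'
```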

Simple Web Project Deployments with BitTorrent Sync
https://pseudosavant.com/blog/2013/07/18/simple-web-project-deployments-with-bittorrent-sync/ (Thu, 18 Jul 2013)

I have been looking for a good solution for deploying my various web projects lately. It needed to be lightweight, easy to use, allow me to revert back quickly, and not require a lot of server resources. I was leaning toward using Git but ended up using BitTorrent Sync.

You have probably heard of BitTorrent already for their ubiquitous peer-to-peer file-sharing protocol and apps. BitTorrent Sync is a new beta product of theirs that uses a lot of the same core technology but accomplishes a different goal: it syncs folders across multiple devices (computers, tablets, phones, NAS drives, servers) without any central cloud storage service.

Using direct peer-to-peer communications has several advantages over cloud services like SkyDrive or Dropbox. It can sync very large files and folders without having to pay for extra cloud storage. It is more secure, as transfers are encrypted by the clients and not stored on a remote server where the NSA could request them. Most importantly for my use case, it can be a lot faster.

For my deployments I wanted an easy way to push new folders or files from my development server at my house to my Windows Server 2012 web server ‘in the cloud’. There are a number of ways to do this, from simple SFTP/SCP, to Git, or even IIS Web Deploy. Git was looking like a good fit, but to be honest, for some small projects it seemed like overkill to have a Git repo set up.

Using BitTorrent Sync I now have a folder on my development machine that is synced with my web server. When I want to push new files I just copy them to the appropriate folder on my development machine. If I want to be able to roll back, I just make a copy of the existing folder(s) locally before I replace them. Since I’m always working with the files locally the changes are really quick, and BitTorrent Sync propagates them very quickly.

My first test was to update three WordPress blogs I maintain, which meant pushing 36MB of files to the server. What made this a good test is that the files are small: about 3,000 of them. Typically the low I/O performance of a single-threaded transfer protocol like SFTP or SCP makes an upload like this slow. Uploading those files using SCP took about 10 minutes, but BitTorrent Sync did it in less than 2 minutes. Memory usage was also a concern of mine since I have a small VM on the Azure cloud, but most of the time it uses less than 10MB, and I haven’t seen it go higher than 40MB.

I decided to set it up with two-way synchronization, but I could have made it only sync from my PC to the server by setting it up as read-only for the server. A folder can be synced with more than two devices too, so it would be easy to allow someone else access to push files to my server. In fact, I would rather just share a folder with a novice web developer using sync than deal with the hassle and security issues of giving them SSH/SCP/Git credentials.

One quirk with BitTorrent Sync is that it doesn’t yet run as a service; it just runs in the system tray, so if you log out (which is common on a server) the app closes. To get around that until the feature is added, I set up the Windows Task Scheduler to launch the BTSync.exe app using my credentials when the system boots. This sounds counter-intuitive, but if you set it up this way you must uncheck the ‘Start with Windows’ box in the BitTorrent Sync settings so that it doesn’t try to launch again when you log in.
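
From an elevated command prompt, creating that scheduled task looks something like this (the task name, path, and credentials are illustrative):

```
schtasks /Create /TN "BTSync" /TR "C:\Apps\BTSync\BTSync.exe" /SC ONSTART /RU myuser /RP mypassword
```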

The best part of all of this is that it was so quick and easy to set up. It took me less than 30 minutes to set it up on both machines, sync down about 1GB of websites, and update my three blogs. Check it out at BitTorrent Labs.

I’m back.
https://pseudosavant.com/blog/2010/10/20/im-back/ (Thu, 21 Oct 2010)

It has been just over two years since my last post. I have had many intentions of writing on my blog but apparently always found something else to do. So what have I been up to?

I started working at DivX as one of the product managers on the consumer software team. It has been a lot of fun and I have been able to take the lead on a lot of interesting projects. My products are the Codec Pack, Converter, and Web Player. The digital video space has been a really interesting place for the last couple of years, and we’ve been able to turn some key threats (Windows 7 and HTML5) into big opportunities through our software (the Codec Pack and Web Player, respectively). The Web Player in particular is something that I have spent a lot of time on over the past year. We recently released a beta that introduces two new features that I am particularly proud of.

First, it now supports the HTML5 <video> API, which is something I have been following for a long time. I am really glad to see HTML5 finally getting some traction, but the one area where things are still kind of a mess is <video>. There isn’t a consistently supported format across the major browsers yet, and some browsers have pretty low playback quality. We are helping to alleviate this by delivering an HTML5 <video> platform with consistent support for multiple formats (H.264, MP4, MKV, MOV, and DivX), all in very high quality with hardware acceleration (when available), on Windows and Mac for Firefox, Chrome, and even Internet Explorer. Users just have to have our plugin installed and it will support standard HTML5 <video> markup. Check out a little HTML5 demo I made to see how it works.
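
In other words, ordinary markup like this (the file name is made up) just works once the plugin is installed:

```html
<!-- Standard HTML5 video markup; the plugin handles formats the
     browser can't play natively, like MKV or DivX. -->
<video src="trailer.mkv" width="640" height="360" controls>
  Fallback text for browsers without video support.
</video>
```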

Second, we introduced a new feature we are calling DivX HiQ. It lets you choose to use the DivX Plus Web Player on popular sites like YouTube and Vimeo instead of their default Flash players. Because the Web Player is solely focused on video, unlike Flash, it offers a much better experience with dramatically lower CPU and power consumption. Don’t just take my word for it though; see what users are posting about it on our forums.

I have also been doing a bit of programming, since I’ve been getting really involved in a lot of web/HTML5 stuff through my work. I made an “Instant” search using Bing’s AJAX API, which was novel until Google Instant came out. It is a fun project and it works really well on mobiles. I also created a Google Code project for a JavaScript statistics library (I hope my Purdue professors are proud :) and a Silverlight audio player that supports the HTML5 audio API. I’ll probably blog about them more later.

That is a sampling of what I have been up to in the tech world. I will try to post more regularly (once or twice a month) about what is happening in HTML5, media, and the mobile landscape.

The Web 2.0 Has Toll-Booths: Cox, Comcast, and Some Clarity
https://pseudosavant.com/blog/2008/06/19/the-web-20-has-toll-booths-cox-comcast-and-some-clarity/ (Thu, 19 Jun 2008)

On a recent call to Cox about a billing issue, I stumbled across a very interesting finding: Cox is already implementing data transfer caps. The rep on the phone told me about them and acted like it was no big deal. Intrigued, I looked into this further and found some interesting insights.

The rep I talked to mentioned the data transfer caps while explaining the difference between a couple of the plans. I mentioned that I was surprised they had caps and repeated back what they were. He was surprised I said that and nonchalantly replied that everyone does it. I mentioned that it has been big news that Comcast is acknowledging its network management practices, including bandwidth caps, and applauded the rep and Cox for being more straightforward about theirs.

After getting off the phone, I went to Cox.com to see what all of the caps were and, surprise surprise, I couldn’t find them. Their Internet service page lists upload and download speeds, the type of IP address you’ll get, whether the plan has “PowerBoost” or not, how much webspace you get, and even how large the e-mail accounts can be, but it doesn’t list caps. I looked all over the site and couldn’t find them anywhere. So I searched Cox for download caps using Live Search a few times until it came up.

Turns out it is <sarcasm>really easy</sarcasm> to find. Just click on the 4pt-font “Policies” link at the bottom of the page, then click on #13 “Bandwidth, Data Storage and Other Limitations” and then, in the middle of that paragraph, click “Limitations of Service”. Isn’t it so obvious? The Policies page is the only page on Cox.com that actually links to the caps (that I could find). To be fair, once you finally find it, the page is quite clear on what each service plan allows.

For the record, I am not actually against the idea of consumption caps. There are just three major problems with the current implementations I’ve seen in the marketplace.

#1: They are very unclear to consumers. On Cox.com the information is buried in a series of pages that only attorneys would be attracted to. The consumption caps need to be shown on the same pages as the bandwidth speeds.

Comcast is even worse than Cox; they don’t even say exactly what the caps are. How much data is 40 million e-mails, really? While their examples are a little more understandable to average users, they really need to list the actual cap.

Oddly enough, when I used their benchmarks with the averages for my own files and e-mails to calculate their caps, their caps came out much higher than Cox’s, so you’d think they wouldn’t be shy about it. The difference between the examples is a joke, though. The effective cap is about 64GB/month using the photo example with my pictures (and I have an 8MP Canon 20D, so my pictures are actually quite large), but it is a whopping 4TB (yes, terabytes) if you use the 40-million-e-mail example. Talk about unclear.

There also needs to be a way for consumers to check their consumption. There is no place (at least that I could find) where consumers can see how much they are consuming (à la cell phone minutes). Even if you track your own consumption somehow (DD-WRT can do it on a number of routers), the ISPs conveniently don’t recognize anyone’s numbers but their own.

#2: There is a wide disparity between plans (at least at Cox). This is really a byproduct of #1; they don’t make it easy to find what each plan allows.

When I recently signed up for Internet with Cox they tried to sign me up on some combo promotion deal for cable TV and Internet. It included their Value Internet plan (1.5Mbit/256kbit @ $29.99/month). I opted to upgrade to the Preferred plan (7Mbit/512kbit @ $43.99), mostly for the higher uplink for online gaming and VoIP.

It turns out that the Value plan only includes 4GB of downstream and 1GB of upstream traffic per month, versus 40GB and 10GB (respectively) for Preferred. So for 47% more per month I get ten times the transfer allowance. Who would think that the difference would be so large?

I can easily download 2-3GB in game demos in one day over Xbox Live on a regular basis. I would have blown past my cap in less than a week for sure. I wouldn’t have known the difference until my Internet got cut off or I got a threatening letter. Hence the need for clarity in listing what is included in the Internet packages.

Some examples at Cox are even worse. They have Preferred on a special for $19.99 and Economy (the lowest tier) for $14.99 right now. Economy only includes 3GB of downstream traffic. For an extra $5 you’ll get over 13 times more download capacity. Why can’t this be more obvious?

#3: The caps are ridiculously low. I analyzed how much you could utilize your connection for 24 hours a day, and for an adjusted day of 16 hours (to account for sleep), and here is what I found. I looked at what I call acceptable average utilization (AAU): the average bandwidth, expressed as a percentage of the plan’s rated speed (acceptable speed / rated speed), that you can consume without exceeding the caps imposed by the ISP.

Every plan allows less than a 2% AAU rate at its rated speed. On the Value plan (read: not even the lowest tier) you can only average 13kbps! If you account for sleep (not that BitTorrent or my backup software sleeps), the top adjusted AAU rate of any plan is still only 2.7%.
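That 13kbps figure falls straight out of the cap arithmetic: the Value plan’s 4GB/month downstream cap is about 34 billion bits, a 30-day month is about 2.6 million seconds, and 34Gbit ÷ 2.6 million seconds ≈ 13kbps, which is under 1% of the plan’s rated 1.5Mbit speed.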

To put that in perspective: on the higher Preferred plan, constantly streaming an online radio station at 192kbps every day would use up your entire consumption cap by itself. If you live with a couple of other people who stream music too, then you can each only do 8 hours per day. In my book that is hardly “excessive usage” for someone paying for the second-highest-tier plan.

I think I’ll have to check out what DSL and Fiber are offering in my neighborhood to see if I can find a company who agrees.

*Here is a link to my spreadsheet with all of my numbers in more detail.

**Looks like the caps are already causing problems.

Diatribe/Opinion: Internet Video and TV can’t happen with DRM
https://pseudosavant.com/blog/2007/12/13/diatribeopinion-internet-video-and-tv-cant-happen-with-drm/ (Thu, 13 Dec 2007)

This post is in reference to Bob’s post on Internet Video and TV. It started out as a comment but quickly became too big for that, so here is my $.02. The problem isn’t technological at its heart; it is the content producers and distributors that are at fault, and here’s why.

Just look at what has happened with CableCard, especially as it affects Vista Media Center. When I upgraded to Vista (and I actually do consider it an upgrade, FWIW) one of the main selling points was Media Center and the integration it offers with my Xbox 360. I have to say Vista Media Center is awesome: by far the best DVR interface I have used, and I love how it works on my Xbox 360. There is one major gaping hole in it though: getting high-quality digital (HD or standard-def) content. It isn’t Microsoft’s fault either, as all the reliable unencrypted sources (NTSC and OTA ATSC) work great.

Content producers require DRM, and that leads to pretty much all of the technological problems. In fact, all of the technical problems I encountered in my previous posts were DRM-related. I won’t use (read: pay for) Vongo or CinemaNow again because it was too much trouble to constantly troubleshoot the DRM issues. And I’m someone who can actually troubleshoot it; what about regular people like my wife? Here I am, a paying customer who just wants to hand over my money for some entertainment, and the content provider’s arbitrary decision to force DRM is stopping me! I know it makes sense to all of us “regular” people why this is incredibly stupid, but the content people still haven’t gotten it. They should read this paragraph ten times.

So what about getting my content fix through my digital cable subscription? Well, again even though I am a paying customer, that doesn’t really matter. The content providers require encryption, so CableCard came into existence. But CableCard is done by CableLabs, which is basically owned by all of the cable companies, who have their own interests to protect. The net effect? I can’t get digital cable on VMC without buying a new PC (instead of just a USB/PCIe/Firewire add-on) with a special $300 tuner that handles encrypted QAM channels because CableLabs says they have to “certify” the entire setup.

My PC’s TV tuner (AverTV Combo PCIe, ~$90) can handle non-encrypted digital cable (unencrypted/clear QAM) without a problem, but that only covers the networks (ABC, CBS, NBC, Fox) because the FCC requires it. To get around that, the cable companies constantly change the channel location of the clear QAM channels so that it continually messes up the programming guide. Again, I’m a paying customer just wanting to enjoy the entertainment I have paid for, and arbitrary technical requirements are stopping me. The only thing stopping me from just dropping my cable and stealing all of the same content is my ethics. So the content business model is relying on the ethics of their customers and DRM? Sounds like a good plan… Honestly, I would love to have someone try to explain that one to me.

So the technology is there for affordable and convenient digital cable to my PC, but it won’t work because they require encryption. The technology is also there to easily consume video from the Internet, but you either have to pay for DRM’d junk, or you can steal the video via P2P. Think about how convenient DivX-formatted videos on P2P are: they will play on the Xbox 360, PlayStation 3, Windows, Linux, and Mac, and even some DVD players. The same holds true for MP3; it plays anywhere because it has no DRM. Using RSS and BitTorrent I could even have my computer automatically download all the shows I want to watch; it just isn’t legal. If there wasn’t the arbitrary technological requirement to have DRM, companies such as TiVo or Netflix would be able to deliver the true mass-market media consumption products that would actually give people what they want.

In the startup world, it is really common to run into entrepreneurs (or probably wantrepreneurs) who are so worried about giving up equity to partners or investors that their business fails. Essentially they end up owning 100% of nothing, instead of 10% of something. The content producers are the same as these naive entrepreneurs, and if they don’t change their ways they are going to end up owning 100% of nothing. They will continue on that trajectory as long as it is more convenient to consume stolen content than to willingly pay for it. A final note to the content companies: get your heads out of the sand and stop worrying about keeping people from copying your content, and start worrying about getting people to pay for it; those are two very different things!

The RIAA is at it again, more settlement letters to students
https://pseudosavant.com/blog/2007/09/21/the-riaa-is-at-it-again-more-settlement-letters-to-students/ (Fri, 21 Sep 2007)

Purdue University announced this morning that they received 47 new settlement letters from the Recording Industry Mafia Association of America.

Purdue spokesperson Jeanne Norberg said: “As an Internet service provider, Purdue will forward these letters when the user can be accurately identified. Purdue will not voluntarily provide names to the RIAA. However, should those notified choose not to pay the settlement, the RIAA may obtain court-ordered subpoenas to obtain the individuals’ names.”

Of the 37 students who received settlement letters last semester, 21 were issued subpoenas this summer. “Purdue [provided] the names of 19 individuals, and subsequently the RIAA reduced its total request for names to 17.”

Am I the only one who is just a little disturbed by the line “…should those notified choose not to pay the settlement?” I do not condone peer-to-peer sharing stealing of music, but I think the record companies’ resources would be better spent working on a new business model that leverages digital music and the Internet instead of suing four dozen kids in one of their key customer demographics. Hopefully we’ll see more creativity in music distribution business models, such as SpiralFrog, and more consumer-friendly technology advancements, like Microsoft’s new watermarking technology, in the future.

Full Disclosure: I am a grad student at Purdue. See our previous coverage here.
