PeopleSoft and Star Trek

When I first encountered PeopleSoft, I was woefully underwhelmed. It looked like something a newbie PHP developer would have written 10 years prior. In fact, I’ve worked with dozens of similarly functioning sites over the years, so it honestly wasn’t a big deal.

I think I first noticed it was a big deal when I got an email about a year ago from HR, saying they’d give bonuses to people who referred Java and PeopleSoft professionals. That got me to thinking PeopleSoft and Java developers were roughly equivalent. I noticed the positions for them were being posted at roughly the same grade. I didn’t think much of it beyond that.

Recently I figured out what PeopleSoft development actually is. It’s not coding at all. It’s configuration management. Not too far removed from someone who can work in Drupal. I was floored. Not because there’s anything wrong with that, but because such a focus was being put on configuration over custom development. I find this both refreshing and scary.

I woke up the other night and couldn’t get back to sleep thinking about this. Was this the future of web development? Configuration management? Was web development eventually going to become 90% “PeopleSoft”? I’ve even been thinking it made sense because Wesley Crusher — while gifted — was not the kind of person I saw coding the woman in red in C. That would require a team of developers. Sure, maybe there are some awesome libs in the future, but more likely programming in the future will be tantamount to drag and drop. I mean, think about how they controlled the Holodeck: they were giving basic commands, parameters, and values. Voice-activated PeopleSoft.

Eventually hardware will get better and better, and the bloat of PeopleSoft and like platforms will be less of an inhibitor.

Eventually is a ways away though. Today, speed is paramount. Users demand immediate gratification, and that requires optimization. And PeopleSoft can’t meet user expectations in that way. So it just remains a corporate institution. Something people will put up with because that’s what’s been sold to them.

Scrum vs Kanban

I wanted to learn about Kanban — as I wanted to see what the “alternative” to scrum was. What I found pretty quickly was that they are not equivalent systems.

One big thing I found was that Kanban isn’t a replacement for scrum. It’s actually a very small change management procedure that is supposed to be introduced to an existing process as an added layer, which can be used to help make incremental improvements.

Kanban is super simple. All it is, at its core, is custom workflow steps with limits on how many items can be in each step. It’s a visual way to keep track of where your “stories” are in process and limits to keep too many things from being “in process” or “in testing” at one time. In that way, it’s supposed to help you keep on target, with minimal context switching.
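As a sketch of that core idea (the column names and limits here are hypothetical, not from any particular Kanban tool), a WIP limit is just a refusal to pull a card into a full column:

```javascript
// A toy kanban board: named columns, each with a WIP (work-in-process) limit.
function Board(limits) {
  this.columns = {};
  for (var name in limits) {
    this.columns[name] = { limit: limits[name], cards: [] };
  }
}

// Add a card to a column, refusing if the column's WIP limit is reached.
Board.prototype.add = function(column, card) {
  var col = this.columns[column];
  if (col.cards.length >= col.limit) return false; // limit hit: pull refused
  col.cards.push(card);
  return true;
};

// Move a card between columns; the move fails if the target column is full.
Board.prototype.move = function(card, from, to) {
  if (!this.add(to, card)) return false;
  var src = this.columns[from].cards;
  src.splice(src.indexOf(card), 1);
  return true;
};

var board = new Board({ "on deck": 10, "in process": 2, "testing": 1, "done": 100 });
["story-1", "story-2", "story-3"].forEach(function(s) { board.add("on deck", s); });

board.move("story-1", "on deck", "in process"); // true
board.move("story-2", "on deck", "in process"); // true
board.move("story-3", "on deck", "in process"); // false: two stories are already in process
```

The limit is the whole mechanism: story-3 stays put until something in process moves on, which is what keeps the team on target with minimal context switching.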

Pro-Kanbaneers state that the weaknesses of scrum are:

  1. Unclear development steps
  2. Context switching
  3. Partially done work

I’m not convinced “unclear development steps” is a real thing, because they’re mostly saying scrum is confined to 3 or 4 workflow steps (“on deck”, “in process”, sometimes “testing”, and “done”). I’m pretty sure in scrum you’re “allowed” to have custom workflows, where the development steps become more clear. But there is a real problem with getting bottlenecked in a sprint and having to carry stories over to the next sprint. And context switching is always a problem, but not really avoidable…

There is a perception of scrum that it’s very rigid. I’ve never felt that way, but then, it’s not like I read any books on scrum. I’ve always thought it was adaptive — because doesn’t it have to be?

We deal with partially done work by either breaking stories out into smaller stories or breaking out tasks and making sure the tasks that are done are complete and the hours recorded (so that our hour burndown is useful). But maybe my perception of scrum is wrong.

The more I read and hear about agile, though, the more I feel people are missing the point. They say the word “agile”, but the word loses its meaning if you’re not agile… The whole reason agile exists is to make processes more adaptable, to make people more amenable to change. It’s all about constantly changing and adapting to what is needed.



Accessibility and Mobility and Web Development

Mobility and accessibility are not new technologies. They are not extras that some of us need to tack on to what web developers do.

They are an essential part of web development and should be considered as basic skills.

Accessibility is not hard. It used to be, it really used to be, but with HTML5 it’s actually quite simple: just a small part of the HTML5 spec, something web developers should know. Mobility was a hot new topic 6+ years ago. Now it’s expected that the applications you build are in some way available on a mobile platform.

It is an expectation.

Why is this not the reality? Why are web developers still making applications the same way they were 10 years ago? It partly has to do with web development being a “full stack” profession for so long. Traditionally, the front end has been small and the back end has been “the application”. The direction of the web is putting more of the application on the front end, so developers who are considered full stack now have to know more than basic HTML and the minimal JavaScript it takes to do some validation. JavaScript has become huge; for large applications, it requires its own framework.

I don’t think that’s been adequately realized by the web development community yet. The future of the web is a partnership, where the applications are separate: the back end an API, the front end the “actual application” that gets the data it needs from the API. As such, people are turning more to micro frameworks for the back end, and away from what had become the staple Rails clones — and much further away from the even larger, more complex frameworks.

I’m not sure if Single Page Applications are truly “the” future. But they are the now. It is important to stay on top of this, as this industry is constantly changing. But one thing is for sure: if you’re a business fully invested in back end developers, you’re hitting a problem where you feel like you need to hire “UI specialists” when you really need either 1) back end developers to beef up their front end development skills or 2) to hire developers focused on front end “development” (as opposed to graphic designers).

Accessibility, I’m Still Wrong!

A while ago, I talked about the legal requirements for web development in academia. I pointed to Section 508 because it was “just another section” of the Rehabilitation Act of 1973, and Section 504 was DEFINITELY required.

I was wrong.

Upon further inspection, we are ONLY under Section 504, because Section 504 is a matter of civil rights, while 508 is “just a guideline”. Section 504 (or Title III of the ADA, the Americans with Disabilities Act of 1990) is what people reference when filing suit. 508 is not directly enforceable outside of government agencies. Even then, it can be trumped by 504.

So when doing things in academia, using 508 is useful in that it will get you 90% of the way there, but that 10% can still get you under 504. The trick is it’s not clearly defined. It’s left very ambiguous. This is probably a good thing from an accessibility standpoint because it can remain technology agnostic.

So stick to WCAG 2.0 AA.

Using jQuery in Node with jsdom

After having watched a ton of Node.js tutorials (and TAing for a JS class), I decided a while ago “for my next script, I’m totally going to use Node.”

So I finally got the opportunity this last week to write a script. Tasked with a menial job, making a script to accomplish it brightened my day.

The first script was dealing with an XML API feed. So I immediately found xml2js, a nice converter, and set about looping through some API URLs, collecting the data I needed and totaling it up. It was a mess, and looked like this:

[sourcecode language="javascript"]
var https = require("https");
var parseString = require("xml2js").parseString;

var totalEntries = 0;
var something = 0;

https.get("https://someplace/someapi", function(response){
    var body = "";

    response.on("data", function(chunk) {
        body += chunk;
    });

    response.on("end", function(){
        parseString(body, function (err, result) {
            totalEntries += result.feed.entry.length;
            for(var i = 0; i < result.feed.entry.length; i++){
                something += parseInt(result.feed.entry[i]['something'][0]['somethingelse'][0].$.thingiwant);
            }
            console.log("Total stuff: " + something);
        });
    });
});
[/sourcecode]

This one was easy to get what I needed, but clearly not the right way to do it. Because the functions happen asynchronously, blah blah blah — that’s not what I’m writing about.

The next one was very similar, but I had to scrape a webpage, not just XML data. So I found a nice lib called jsdom, which created a DOM for me to use jQuery on.

[sourcecode language="javascript"]
var jsdom = require("jsdom");

jsdom.env(url, function(errors, window){
    var $ = require("jquery")(window);
    var total = 0;

    $(".some_class").each(function(key, value){
        // just use a regex to get it
        // it's buried in the onclick, so I'll have to use a regex regardless...
        var result = value.innerHTML.match(/newWindow\('([^']*)'/)[1]; // get first grouping

        jsdom.env(host + result, function(errors, window){
            var $ = require("jquery")(window);

            // use regex to get the xxxxxxx because I'm lazy
            var result = $('head').html().match(/someRegex/g);
            if(result !== null){
                for(var i = 0; i < result.length; i++){
                    var thing = result[i].match(/"([^"]*)"/)[1]; // get first grouping
                    total += parseInt(thing, 10);
                }
            }
        });
    });
});
[/sourcecode]

This was super easy / super powerful: using something I’m already so familiar with to accomplish a task it’s well suited to. The scripts themselves took minutes to write — if you don’t take into account the time I spent finding where to get what I needed.

The Responsibility of Creativity

This article was interesting. Not altogether surprising, but interesting.

Most of it is pompous fooey designed to make everyone think they’re the creative person in question.

The problem I have with this is here:

A close friend of mine works for a tech startup. She is an intensely creative and intelligent person who falls on the risk-taker side of the spectrum. Though her company initially hired her for her problem-solving skills, she is regularly unable to fix actual problems because nobody will listen to her ideas. “I even say, ‘I’ll do the work. Just give me the go ahead and I’ll do it myself,’ ” she says. “But they won’t, and so the system stays less efficient.”

“I’ll do the work, give me the go ahead and I’ll do it myself.” But they won’t, so she doesn’t. This person is not a risk taker; she is asking the company to be the risk taker, and then absolving herself of responsibility when her idea doesn’t get traction. This doesn’t work.

If you want to be a risk taker, if you want to do things to make things better that are outside of the box, you have to DO them. Don’t ask for permission, just do it. Show your management some small success if you want traction within your company. Don’t cry because they don’t like your ideas. If you’re not willing to go above and beyond, just get back in line and stop complaining.

People DO like creative people, they just don’t want to take the risk you want them to. If you want to be creative or take risks for what you see as a good idea, you have to do it yourself, often on your own time. You have to be the one to make the sacrifice for your own ideas; asking others to do that for you is where we fail.


The Newbie: How to Set Up SSHFS on Mac OS X

Recently, I wanted to find a simple way of mounting a remote Linux file system from my Macintosh laptop. And by “simple,” I wanted the procedure to consist of mostly downloading and installing a tool, running a command, and not having to delve too deeply into editing configuration files. Fortunately, I was able to figure this out without too much trouble, and thought I would record my experience here. The procedure involves two applications, FUSE for OS X and SSHFS, both of which can be found on the FUSE for OS X web site. FUSE for OS X is a library/framework that lets Mac OS X developers implement file systems in user space; SSHFS is a file system built upon the FUSE framework that accesses the remote machine over SSH.

First, let’s establish some terminology. We’ll simply refer to the remote server that I wanted to connect to as the “Linux server” (at the domain “remoteserver”) and define my local machine as simply “my laptop.” We’ll call the file directory that I wanted to access on the Linux server “/webapps”. In essence, I wanted to be able to access the folder “/webapps” on the Linux server as if it were a folder sitting on my laptop.

I’ll also note that I had already set up my SSH keys on my laptop and the Linux server. That needs to be accomplished before anything else. If you need guidance on that, here’s a simple tutorial.

After SSH had been set up:

  1. I downloaded the latest version of FUSE for OS X at the FUSE for OS X web site.
  2. I installed FUSE for OS X on my laptop by double-clicking the disk image, then double-clicking on the installation package. This is pretty standard Mac OS X stuff; it went without a hitch.
  3. I downloaded the latest version of SSHFS for OS X at the FUSE for OS X web site.
  4. I installed SSHFS by double-clicking on the downloaded file. I ran into an issue here where Mac OS X refused to install the package because SSHFS comes from an “unidentified developer.” To get around this, you need to override the Gatekeeper in Mac OS X, which can be as simple as right-clicking on the package and selecting “Open” from the context menu.
  5. Both FUSE for OS X and SSHFS were now installed.
  6. Next, I needed to create a new folder on my laptop which would serve as the mount point. Let’s call that folder “~/mountpoint.”
  7. Now, it was a matter of learning how to invoke the appropriate command to have my laptop mount the Linux server. The command I used was:

sshfs -p 22 username@remoteserver:/webapps/ ~/mountpoint -oauto_cache,reconnect,defer_permissions,noappledouble,negative_vncache,volname=myVolName

Using the above steps, I was able to successfully mount the Linux server. Unmounting is a piece of cake:

umount ~/mountpoint


Additional notes:

The SSHFS command used to mount the remote server is lengthy; indeed, filled to the brim with arguments that I cut and pasted. If you would like to know what each argument does, there is a helpful guide that describes them.
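For what it’s worth, here is the same command with each option unpacked. These descriptions are my own reading of the sshfs/FUSE options; see the guide mentioned above for the authoritative details.

```shell
# The mount command, annotated:
#   -p 22                 connect to the SSH server on port 22
#   auto_cache            keep cached data until the remote file changes
#   reconnect             transparently reconnect if the SSH session drops
#   defer_permissions     skip local permission checks; let the server decide
#   noappledouble         don't create ._* AppleDouble files on the server
#   negative_vncache      cache "file not found" lookups to speed things up
#   volname=myVolName     the volume name that shows up in the Finder
sshfs -p 22 username@remoteserver:/webapps/ ~/mountpoint \
  -o auto_cache,reconnect,defer_permissions,noappledouble,negative_vncache,volname=myVolName
```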

Using SVN with Git

I’ve talked about this before, but I made a pretty picture for it recently to help explain it.

SVN is a centralized repository that we use for controlling deployments, handling sensitive data, and storing environment-specific configs. The majority of the code is contained within Git (GitHub): the development work, the feature-branching workflows, etc.

This works well because SVN is good at being central and having a linear history. Git is good at branching and going nuts with workflow.

The Newbie: Learning Tools Interoperability

We’re educational technologists, which generally means two things:

  1. We like to develop tools for teaching and learning;
  2. We have an on-campus Learning Management System (LMS) for which we have often developed;

The above has now been complicated by the inevitable: Our LMS will be changing in the future, and that LMS is, er, unknown at the moment. There are a wide variety of candidates, to be sure: Blackboard, Moodle, Sakai, and Canvas, to name a few. So which one to develop for? Or do we simply stop development, take some time off, and head out on vacation? The latter, alas, isn’t an alternative. And since we don’t know, exactly, what we are writing for, we’re implementing stand-alone web applications at the moment. It’s nice to be doing so, but it would also be nice to easily integrate these applications into whatever LMS the University ultimately decides upon.

Enter Learning Tools Interoperability (LTI), a specification by the IMS Global Learning Consortium. The specification attempts to establish a standard way for rich learning applications to be integrated with other platforms, such as, say, an LMS. In LTI lingo, the “rich learning applications” are called Tools (delivered by a Tool Provider) and the LMS is called the Tool Consumer. The goal is that users of the LMS can connect to your external, web-based application without disrupting their experience by having to travel outside the LMS. For developers, it means “write once, use anywhere.” That seems ideal. But we know how well “write once, use anywhere” often goes.

Nevertheless, we’re starting to explore LTI, and, fortunately, Instructure, the makers of Canvas, have an entire course for developers, and several of the assignments are devoted to learning LTI (just click on the “modules” link of their course site). Additionally, the MSDLT Blog has a good article on writing a basic LTI tool provider, which lists several links that all developers should be aware of, and shares their early thoughts on LTI. And we’ll (hopefully) continue to share our own thoughts on LTI as we delve into it.




SVN vs Git

SVN is a minor improvement on the CVS design: a linear, central repository. To their credit, the developers of SVN acknowledge this, from the Red Book:

Subversion was designed to be a successor to CVS, and its originators set out to win the hearts of CVS users in two ways—by creating an open source system with a design (and “look and feel”) similar to CVS, and by attempting to avoid most of CVS’s noticeable flaws. While the result wasn’t—and isn’t—the next great evolution in version control design, Subversion is very powerful, very usable, and very flexible.

Git is a Distributed Version Control System (DVCS). It’s important to distinguish it as “decentralized”. What that means is that each user’s clone is a full, literal “fork” of the project.

What Git offers that’s important is lightweight branching and significantly better merging. These are what make advanced workflows (feature branches) possible. See Git Flow.

Branching by itself isn’t where the magic is. What puts Git bounds above SVN is actually the merging. What makes the merging so much better is how the history of commits is kept: Git’s history is a Directed Acyclic Graph (DAG), while SVN’s is linear.

What this means is that an SVN merge doesn’t take into account previous merges of the branch; it merges the files directly. Both are doing 3-way merges, but the third version used (the common ancestor) isn’t the same, because SVN’s linear history doesn’t take previous merges into account. So basically SVN goes back way further, because it doesn’t know where the last merge between the branches was. The result is a lot more conflicts that have to be manually resolved.
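To make the common-ancestor difference concrete, here’s a toy sketch (the commit names and graphs are made up) of finding a merge base over a parent map. With the merge commit M recorded in the graph, the base is the tip of the last merge; with a history that forgot M, the search walks all the way back to the original branch point:

```javascript
// Collect every commit reachable from `start` in a parents map.
function ancestors(parents, start) {
  var seen = {};
  var stack = [start];
  while (stack.length) {
    var c = stack.pop();
    if (seen[c]) continue;
    seen[c] = true;
    (parents[c] || []).forEach(function (p) { stack.push(p); });
  }
  return seen;
}

// Walk b's history breadth-first; the first commit also reachable from a
// is the common ancestor used as the base of a 3-way merge.
function mergeBase(parents, a, b) {
  var fromA = ancestors(parents, a);
  var queue = [b];
  var seen = {};
  while (queue.length) {
    var c = queue.shift();
    if (seen[c]) continue;
    seen[c] = true;
    if (fromA[c]) return c;
    (parents[c] || []).forEach(function (p) { queue.push(p); });
  }
  return null;
}

// Trunk: A <- B <- C; branch: A <- D <- E; M merges E into trunk; F follows M.
var dag = { A: [], B: ["A"], C: ["B"], D: ["A"], E: ["D"], M: ["C", "E"], F: ["M"] };

// The same work, but in a history that never recorded the merge M.
var linear = { A: [], B: ["A"], C: ["B"], D: ["A"], E: ["D"], F: ["C"] };

mergeBase(dag, "F", "E");    // "E": the graph knows E was already merged in
mergeBase(linear, "F", "E"); // "A": without merge info, back to the branch point
```

The gap between “E” and “A” is exactly the extra history that gets re-merged, and where the spurious conflicts come from.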

The above is outdated. As of SVN version 1.5 (2008), SVN includes metadata on merge histories, so it uses the 3-way merge to similar effect as Git now. I’m going to have to write a new one of these after more research…