Archive for the ‘software’ Category
Many .Net applications being developed today leverage dependency injection using some sort of inversion of control (IoC) container. So do we at RemoteX when we develop the product called RemoteX Applications. The product has two client applications which address roughly the same use cases. One targets desktop computers and the other targets Windows Phone (you can read more here and here).
As you can tell by the name, the product consists of several applications (or rather modules). Using frameworks like Prism or Caliburn we can easily manage each part of the product in code. And deployment is taken care of by ClickOnce technology using mage.exe (the Manifest Generation and Editing tool).
But that’s for the desktop client targeting WPF.
So the big question is, how are we going mobile with this?
As regards an inversion of control container, we are “almost there”. We have a home-grown container in place which has been around for a while now, even though it lacks some basic features you would expect an IoC container of the year MMX to have.
Speaking of deployment to Windows Phone, you probably know that you are more or less locked into using CABinet (CAB) files. If you are using the tools Microsoft brought us, you probably also use their Device Setup projects in Visual Studio.
They are good, but you must use Visual Studio to choose the contents of your CAB file and to create/build it.
What this basically means is that we would need to run devenv.exe to build each customer’s customized CAB file.
So up until now we have not had per-customer customized CAB files.
All I wanted was ClickOnce technology and a manifest generator for Windows Phone. So what’s the solution to that?
Say hello to the PowerShell script New-CabWizInf.ps1:
.\New-CabWizInf.ps1 -path .\myapp.inf -appName "My Application" -manufacturer "RemoteX" -fromDirectory .\MyApplication\bin\Release
It works like mage.exe with its -fromDirectory switch and creates the necessary .inf file (like a Visual Studio Device Setup project would). All that is needed from that point is to call CABWIZ.exe and Set-AuthenticodeSignature in PowerShell to create and sign the CAB file.
The real power is the -fromDirectory switch which allows us to create custom CAB files on the fly.
So here is a peek of what our setup package scripts now looks like:
Setup Package for Windows using ClickOnce
mage -new deployment -tofile MyApp.application -fromdirectory bin\Release -name "My App" -publisher "RemoteX"
Setup Package for Windows Phones using CAB files
.\New-CabWizInf.ps1 -path MyApp.inf -fromDirectory bin\Release -appName "My App" -manufacturer "RemoteX"
cabwiz MyApp.inf /dest .\
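The CAB is then signed with Set-AuthenticodeSignature, as mentioned above. A minimal sketch of that step (the .pfx path and CAB file name here are placeholders for illustration, not our actual values):

```powershell
# Sign the freshly built CAB with a code-signing certificate.
# The certificate path and CAB name are examples only.
$cert = Get-PfxCertificate -FilePath .\codesign.pfx
Set-AuthenticodeSignature -FilePath .\MyApp.cab -Certificate $cert
```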
So right now I’m a very happy camper, since our packaging tools for Windows AND Windows Phone now have equal capabilities, which allows us to use dependency injection with dynamic module selection.
Next stop, Prism and Silverlight for the Windows Phone?
Being an enthusiastic software developer is not without its battles. As a metaphor for this particular kind of battle, let me draw on some of my own experience as a parent and a bit of a cooking “enthusiast”.
Ok, so you come home late with the kids after having had a great time playing around in the park all afternoon. The kids are happy and you feel you have had a nice quality moment in life. You also feel quite satisfied with your big dinner plans. Tic-tac-tic-tac. Time flies and you hurry to get home in time for…
Next stop: Dinner.
You: Really enthusiastic about good cooking, nutritious food etc.
Time: Just too late. There is no time to prepare a nice meal from the ground up with hungry kids running around, soon screaming and beating each other up…
Solution: What was the phone number to that nearby pizza/burger/salad/Indian/Chinese restaurant again?
If you think of situations like this, but with defective software in production, you might have taken one of the following paths regarding the “solution” part:
“- Hey, let’s solve this by increasing the disk space for now so that Windows Error Reporting doesn’t fill up the system volume with error logs”
“- Request timeout? Let’s add another web node so we can continue with our big deployment. Let’s investigate the timeout issue later.”
Depending on your current situation, you may right now feel “yeah, what’s wrong with all that?” or “Stop procrastinating! I would never ever do such things – fix the real problem instead”.
Of course you want to start from the ground up and build quality in, but when the shit has already hit the fan, what would YOU do?
I’m not saying that eating fast food or cutting corners in systems development is a good thing to do all the time. However, there is a difference between having a failing/defective system in production and being in “experimentation mode” (a.k.a. building new features).
In both cases, experimenting with cooking and doing software development, you don’t know for how long you need to experiment to get it “feature complete”. What you DO know is that you for sure want to “get it right”, but again, not necessarily “feature complete”. Therefore you have to timebox to deliver on time without any loss in quality.
To avoid staying in the park with the kids too long in the afternoon, I can easily add some more “automated tests”: I can set my mobile phone to buzz, check the azimuth of the sun, look at the clock tower, use a wrist watch and what not to get a signal when it is time to get started with the dinner.
But even though these tests are valuable and can reveal defects, they are no good if I simply ignore them.
As I said before, it’s probably not a problem if this happens once or twice. But if you keep doing it, you will have a quite dissatisfied customer (the family).
Or would you? Really?
It certainly depends on how you define what quality time with your family is about.
(yes, believe me when I say my own and my common-law spouse’s definitions of “quality” DO differ when the shit has already hit the fan in terms of cooking dinner on weekdays :))
The same goes for computer systems running in production. You and your team may have a higher tolerance for (what some people would call) defects in production.
Maybe you aren’t a “timebox” person, and maybe you define quality in other terms than the number of automated tests or automated test coverage percentage.
As development tools mature and people change their minds and/or get replaced, the definition of “good quality” for a particular system may also change a lot during the system’s lifetime.
The challenge is not just about more automated tests. It’s about developers and managers (a.k.a. people), LOCs, test coverage, tools, time and process.
Just installed CodeSaga on our internal TFS server and I must say I’m impressed!
For a long time I have wanted an application like FishEye for our TFS source code repository, but unfortunately, FishEye only works with Perforce, CVS and Subversion…
CodeSaga was quite easy to set up, even though an MSI package would have been nice.
But I guess that is something Torkel might be working on in the near future.
One feature CodeSaga doesn’t have yet is FishEye’s charts. There is already a tab for them in the application, but the page behind it only displays this message:
“Interactive Silverlight charts coming soon to a theather near you!”
Very nice! I’m looking forward to getting them into our installation… 🙂
Torkel Ödegaard is a consultant at Avega and, as far as I understand, he created this application as a demo/sample application that shows what you can do with ASP.Net MVC. The application also utilizes some of the nice frameworks I like very much (Castle Windsor, Rhino Tools) and Boo.
I will definitely look into the code when I get some time for it.
But hey, good for me: I will attend a talk next Tuesday where Torkel is going to speak about ASP.Net MVC. I guess I will ask one or two questions about CodeSaga 🙂
Recently I had the opportunity to work with my friend Micael on an HTTP broadband testing application. The test will be used as a zero-touch diagnostic tool to identify network-related issues between users and a video conference system server.
The test was produced for the non-profit organization Infinite Family, which involves users located in both the USA and South Africa. The video conference system is located in South Africa, and the internet connection to South Africa is provided by a satellite link. The hop over the satellite link alone adds a few hundred milliseconds of response time (ICMP ping), so conditions can be really poor for a video link!
The test has capabilities equal to many of the other tests out there. It tries to calculate a response time, which of course is not ICMP ping as one might think; the test only does HTTP and AJAX. Instead, it uses an HTTP HEAD request to get a rough measurement of how long it takes to establish a TCP session and send some data over port 80.
The test does not only measure response time; it also measures upload and download speeds. For these kinds of tests a lot more data is sent back and forth to the HTTP server using (AJAX) HTTP requests.
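The reduction from raw measurements to reported figures can be sketched roughly like this. These helpers are hypothetical illustrations of the idea, not the test’s actual code; in the real page, similar numbers would come from timing the $.ajax calls:

```javascript
// Hypothetical helpers illustrating how raw timing samples could be
// reduced to the figures a broadband test reports.

// Average round-trip time in ms, dropping the first sample since it
// usually includes TCP connection setup and is therefore slower.
function averageRtt(samplesMs) {
  var warm = samplesMs.length > 1 ? samplesMs.slice(1) : samplesMs;
  var sum = 0;
  for (var i = 0; i < warm.length; i++) sum += warm[i];
  return sum / warm.length;
}

// Throughput in kbit/s, given a payload size in bytes and elapsed time in ms.
// bytes * 8 bits, divided by milliseconds, gives bits per ms == kbit/s.
function throughputKbps(payloadBytes, elapsedMs) {
  return (payloadBytes * 8) / elapsedMs;
}
```

For example, three HEAD requests timed at 300, 100 and 200 ms would report an average of 150 ms (the cold first sample is discarded), and a 125 kB download taking one second works out to 1000 kbit/s.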
The whole test consists of one HTML page with references to jQuery and jQuery UI.
This test was the first project in which I could try out jQuery for real. I had a hard time getting it to work properly on my PCs and the kids’ Mac, but now it works in IE, Opera, Firefox and Safari on both Mac OS and Windows.
The progress bar used in the test is a pretty simple thing to build with a few DIV elements and a bit of CSS. That was the approach I used in an early version, but I decided to try out jQuery UI as soon as I jumped on the jQuery track.
You can find the test right here.
It was a real hassle to figure out these settings, so I decided to note them here for future reference. Since I didn’t find this variant of the Pidgin+SIPE+LCS configuration anywhere else, it may be useful to others too.
While talking to others using Pidgin + LCS, I noticed that the configuration may differ depending on your setup, i.e. an internal-only versus an externally accessible LCS/SIP server.
At work we use an internal only LCS server which is only listening on its default TCP port (5060).
Below you can see that I need to enter domain + username in the auth* fields. I know others who don’t use these fields in their setup, but they have an externally accessible (over TLS/SSL, port 443) LCS/SIP server.
So be prepared, your result may vary :).
- Download Pidgin (v2.5.5 in my case)
- Download SIPE (v1.3.3, libsipe.dll, precompiled dist for Windows)
- Put libsipe.dll in the plugins directory of Pidgin
- Go to Accounts -> Manage Accounts
- Choose Add..
- Enter the following settings.
Protocol = Microsoft LCS/OCS
Username = <your SIP address, which in Active Directory is your primary e-mail address>
Password = <your Active Directory password>
Use Proxy = Checked
Proxy Server = <the FQDN of your LCS server, i.e. mylcs.corp.local>
Use non-standard port = Checked
Port = 5060
Connection Type = TCP
Auth User = <your Active Directory username>
Auth Domain = <your Active Directory domain name in dotted form, i.e. corp.local>
- Click Save and you should see your LCS contacts appearing in the Buddy List of Pidgin
I place myself on no side, or well… probably on both sides of the SOAP/REST camps. Probably because I work for an ISV company, I see benefits in having closed data formats. Not that I don’t like the REST model (I really do), but I see problems in having to expose your “system internals” to the public, mostly because as an ISV I would be afraid that someone might steal intellectual property that the REST model would reveal.
Conversely, I believe it is somewhat risky for an ISV to build a system locked up and closed by implementing its brave new SOA-compatible API with only the SOAP document/literal model.
I believe it’s the same as the parallel with the TV networks: the networks that today actively work against broadcasting over the web won’t survive five years from now!
If we broaden our view a bit, to the talk about how “all” software will be open source 10 years from now: I really don’t think that will happen.
But, I do think however, that “all” software (to be successful and to survive) will be using an open format. Like the REST-model.
The intellectual property that is left is in the solution, the implementation, the code and the service. That’s where the money will be.
The provider that delivers the best quality, is the most available, the most driven by change and the most “open” will be the most successful one.
A backup title for this post could be “ISV: You won’t survive just because you published your XSD/WSDLs!”
Last week I was part of a quite worthless but amusing discussion about unique identifiers. It was about why call something universal when it’s not, and why change from GUID to UUID (2).
Questions like “- What will change now that we go universal and not just global?” were asked.
In that discussion one could easily sell in the REST concept, IMO. Funny it would be, if both “GRLs” and “URLs” existed on the web. 😉
Last Friday (November 11th) I attended a DDD workshop led by Eric Evans himself. It was of course very fun to meet him in person, since he’s the author of the great book Domain-Driven Design.
The workshop was arranged by a company called Citerus, which I hadn’t heard of before this event.
If you get the chance to attend one of Eric’s DDD workshops, I really recommend you do. (In fact, I recommend attending any DDD event that is lined up in this way.)
The cons: Since it seems that the common Swede is shy about raising their hand in class, it would have been better _not_ to line up the tables like a typical classroom. I think that discourages the participants from volunteering and interacting with each other.
The bottom line is: attend a DDD workshop if you get the chance, and thank you to all the nice guys at Citerus for letting me.