The cloud is real, and scaling up isn't enough these days. Let us help you move your business into the cloud.
It's all about designing your application to match the requirements. We have years of experience with this.
We really love to write code, and we build both server-side and client-side applications!
Is your data, or your clients' data, safe? Let us take a look; we have years of experience hardening web applications.
The application is deployed to Microsoft's Azure cloud, and is designed to scale up and out.
An easy-to-test server-side framework to handle rendering and serving pages.
A document database for handling complex data and querying geo data.
Asynchronous loading to enrich the content without killing performance.
The application is deployed to Microsoft's Azure cloud, and is designed to scale up and out.
An easy-to-test server-side framework to handle rendering and serving pages.
Simple data, simple indexes, fast querying: the relational database at its best.
Asynchronous loading to avoid page loads ruining the user experience.
The application is deployed to Microsoft's Azure cloud, and is designed to scale up and out.
An easy-to-test server-side framework to handle rendering and serving pages.
Two types of data means two types of databases for the best performance.
Asynchronous loading to enrich the content without killing performance.
The site is a news, resource and community site for people who collect, and maybe even build, scale models (the plastic kits you got as a kid, which you glue together and maybe even paint to make them look like a miniature version of an actual plane, tank or what not).
The data we store for the kits is quite complex, with collections of simple and complex types as part of the kit object.
Of course this could be modelled into a number of tables in a relational database, and actually was before we moved to a document database.
Unfortunately that would be a bad idea. Whenever you need to get a kit 'record', you also need all this extra data, which would mean several inner and outer joins on every read.
The more data you have, the bigger the server you need. Even worse, if the database gets really big, there's no easy way of splitting the data out across servers.
When using a document database, in this case MongoDB, you do not store data as records with a set number of columns; you store documents.
Documents are complex objects that can contain regular simple data like strings and integers, but also collections of simple and complex types.
So when we need to display information about a kit, we fetch all the data needed in one single query.
This of course means that we do have some redundancy, but that is a lot better than having to do several reads!
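To make the idea concrete, here is a hypothetical kit document sketched as a Python dict (the field names are illustrative, not the site's actual schema):

```python
# A hypothetical kit document, sketched as a Python dict.
# In MongoDB the whole structure is stored and fetched as one document,
# so a single read returns everything a kit page needs.
kit = {
    "_id": "revell-03943",
    "name": "Spitfire Mk.IIa",
    "scale": "1:32",
    "brand": {"name": "Revell", "country": "Germany"},   # embedded complex type
    "tags": ["aircraft", "WWII", "RAF"],                 # collection of simple types
    "parts": [                                           # collection of complex types
        {"sprue": "A", "pieces": 58},
        {"sprue": "B", "pieces": 42},
    ],
}

# In a relational model the brand, tags and parts would live in separate
# tables, forcing joins on every read. Here it is all one object:
total_pieces = sum(p["pieces"] for p in kit["parts"])
print(total_pieces)  # 100
```

In a relational schema, that same page view would need joins against brand, tag and parts tables.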
Being a community site, the solution has to be ready for lots of users and, more importantly, the interactions of those users.
One of the things the users can do is register the unbuilt kits they have.
Trying to display the kit info fast is a bit hard when we also want to display the users who own that particular kit.
You could argue that the information about which users own a particular kit belongs on the kit object. Especially when we want to display the information on the same page as the kit.
Unfortunately that would mean lots of more or less concurrent updates of the kit object, and would bloat the size of that document.
To avoid this bad use of the database, we store the information in its own collection/table. That way you'll get a lot of inserts into this collection, and a few deletes. Much better!
To avoid ruining the great performance we have when displaying a kit, we use asynchronous loads to show this additional information.
Now we can let the visitor know if he himself has a particular kit (in case he forgot), and we can let him know who else owns the kit, in case he needs feedback or help.
If you actually need the extra info, you probably won't mind the sub-second delay before it has loaded.
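The separate-collection idea can be sketched in a few lines of plain Python (a list stands in for the MongoDB collection here; the function and field names are ours, not the site's):

```python
# Ownership records live in their own collection: one small document per
# (user, kit) pair. Registering a kit is an insert and removing it a delete;
# the kit document itself is never touched, so it stays small and update-free.
ownership = []  # stands in for a MongoDB collection

def register_kit(user_id, kit_id):
    ownership.append({"user_id": user_id, "kit_id": kit_id})

def owners_of(kit_id):
    # Loaded asynchronously, after the kit page has already rendered.
    return [doc["user_id"] for doc in ownership if doc["kit_id"] == kit_id]

register_kit("alice", "revell-03943")
register_kit("bob", "revell-03943")
register_kit("alice", "tamiya-61119")
print(owners_of("revell-03943"))  # ['alice', 'bob']
```

The kit page renders immediately from the kit document alone, and the owners list arrives a moment later from this collection.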
Not all shops are virtual/internet shops.
When you travel to a new city or country, you'd probably like to visit a shop that sells these hobby items.
So we need to locate shops close to a visitor's current location.
Some document databases and search engines (MongoDB and Lucene, to mention two) have been good at storing and querying data based on location and distance for a long time.
So we store the longitude and latitude of each shop in the database, and when we need to locate shops nearby, we do a simple query with the visitor's current longitude and latitude and a maximum distance.
It just works, and the really crazy part is that you get the results really fast!
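The query boils down to "shops within X km of here". A self-contained sketch of the distance maths the database does for us (pure Python, hypothetical shop data; in MongoDB this would be a 2dsphere index and a $near query instead):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius, about 6371 km

# Hypothetical shops, each stored with its latitude and longitude.
shops = [
    {"name": "Copenhagen Hobby", "lat": 55.6761, "lon": 12.5683},
    {"name": "Berlin Modellbau", "lat": 52.5200, "lon": 13.4050},
]

def shops_nearby(lat, lon, max_km):
    return [s["name"] for s in shops
            if haversine_km(lat, lon, s["lat"], s["lon"]) <= max_km]

# A visitor standing in central Copenhagen:
print(shops_nearby(55.68, 12.57, 50))  # ['Copenhagen Hobby']
```

The database keeps a geospatial index over these coordinates, which is why the results come back so fast even with many shops.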
Visit Scale Modelling Central.
The site is the official home of the open source MVC Forum project, a forum solution for ASP.NET MVC projects.
The site is hosted in Microsoft's Azure cloud, mainly to make sure the solution can actually scale up (easy) and out (a bit harder).
We did not want to pollute the solution with code needed to handle running in multiple instances. If you do not need the code to handle running in multiple instances, your solution should not include it.
When a visitor posts a new thread or uploads a new attachment, all running instances should serve the same result to subsequent requests.
When potentially running in multiple instances, you need to stop using local storage for anything.
If you store a file locally (in an instance), how are the other instances going to serve that file in a response to a request? And even worse: if you shut down that instance, that file is gone!
The solution is to store all files (including Lucene indexes) in a shared storage area. In the case of Azure, we store everything in Blob storage.
So no need for the instances to talk to each other, we have solved the problem without having to write complex code.
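The principle can be sketched in a few lines of Python (a dict stands in for Blob storage here; in production the same interface would be backed by an Azure Blob container):

```python
# All instances write through one shared store instead of local disk.
# Here a dict stands in for Azure Blob storage; every "instance" sees
# the same data, so any of them can serve any file.
shared_blobs = {}

class BlobStore:
    def upload(self, name, data):
        shared_blobs[name] = data

    def download(self, name):
        return shared_blobs[name]

# Instance A handles the upload...
instance_a = BlobStore()
instance_a.upload("attachments/diagram.png", b"\x89PNG...")

# ...and instance B can still serve it, even if A is later shut down.
instance_b = BlobStore()
print(instance_b.download("attachments/diagram.png"))
```

Because the store is the single source of truth, adding or removing instances requires no coordination between them.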
Visit MVC Forum.
The Tools 4 Testing solution is meant to be a number of tools, mainly for developers and testers.
Hopefully these tools will make it easier to test applications, something that is getting more and more important as users and clients demand faster and faster delivery cycles.
The first tool implemented is a 'SMTP trap', a fully functional SMTP server that will trap all incoming e-mails (as in not deliver them to the recipient server).
The second tool being implemented is a performance tool.
We cannot predict what kind of application our users need to test, so our SMTP trap could be trapping e-mails from a web application sending out e-mails every now and then, or from a mail robot sending out thousands of e-mails, or probably both at the same time.
This is what scaling out is all about.
Microsoft's Azure cloud can auto-scale, so you can configure the solution to spin up additional instances whenever the average CPU load on the existing instances reaches a given percentage.
And almost as important, it can scale back in when the average CPU load drops back down.
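The rule itself is simple. Here is a toy sketch of the decision Azure's auto-scaler makes for us (the thresholds and limits are made-up examples, not our production settings):

```python
def desired_instances(current, avg_cpu, scale_out_at=70, scale_in_at=30,
                      min_instances=2, max_instances=10):
    """Toy version of a CPU-based auto-scale rule."""
    if avg_cpu >= scale_out_at:
        return min(current + 1, max_instances)   # scale out under load
    if avg_cpu <= scale_in_at:
        return max(current - 1, min_instances)   # scale back in when idle
    return current                               # steady state

print(desired_instances(2, avg_cpu=85))  # 3
print(desired_instances(3, avg_cpu=20))  # 2
print(desired_instances(2, avg_cpu=50))  # 2
```

Azure evaluates a rule like this against CPU metrics averaged over a time window, so short spikes don't cause constant flapping between instance counts.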
The SMTP server will receive lots of e-mails, and at times they will arrive in heaps. We might be receiving them across several instances, and we will be receiving them from several threads.
We need to store all this data somewhere that can handle the amount of data and the speed with which it arrives.
This is what blob storage is made for.
Blob storage is like one huge hard disk drive, with all the bytes you'll ever need (well, maybe not). If you need 10GB, you'll have 10GB and pay for that; if you need 10TB, well, you have that and pay for that.
Blob storage is fast, of course not as fast as an actual physical hard disk drive on a physical server, but fast enough, and we can access it concurrently from all our threads and instances.
So we store all the incoming e-mails as files on this huge virtual drive. When we need to show these e-mails through the web application, we just read the bytes and present them in the browser.
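Conceptually, each trapped e-mail just becomes a named blob of bytes. A minimal sketch (the function names and naming scheme are ours; a dict stands in for Blob storage):

```python
import uuid

blob_store = {}  # stands in for Azure Blob storage

def trap_email(raw_message: bytes) -> str:
    """Called by the SMTP server; safe to call from many threads/instances,
    since each message gets its own unique blob name."""
    blob_name = f"emails/{uuid.uuid4()}.eml"
    blob_store[blob_name] = raw_message
    return blob_name

def read_email(blob_name: str) -> bytes:
    """Called by the web application when displaying a trapped e-mail."""
    return blob_store[blob_name]

name = trap_email(b"From: robot@example.com\r\nSubject: Test\r\n\r\nHello!")
print(read_email(name).decode().splitlines()[1])  # Subject: Test
```

Since every write goes to a fresh, uniquely named blob, there is no contention between threads or instances, no matter how fast the e-mails arrive.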
Visit Tools 4 Testing.