
How to Implement the Repository Pattern in ASP.NET MVC Application


The Repository Pattern is one of the most popular patterns for creating an enterprise-level application. It keeps us from working directly with the data in the application and creates separate layers for database operations, business logic, and the application's UI. If an application does not follow the Repository Pattern, it may have the following problems:

  • Duplicated database operation code
  • Database operations and business logic can only be unit tested through the UI
  • External dependencies are needed to unit test the business logic
  • Database caching and similar policies are difficult to implement

Using the Repository Pattern has many advantages:

  • Your business logic can be unit tested without the data access logic;
  • The database access code can be reused;
  • Your database access code is centrally managed, so it is easy to implement database access policies such as caching;
  • It's easy to implement domain logic;
  • Your domain entities or business entities are strongly typed with annotations; and more.

On the internet there are countless articles written about the Repository Pattern, but in this one we're going to focus on how to implement it in an ASP.NET MVC application. So let's get started!

Project Structure

Let us start with creating the Project structure for the application. We are going to create four projects:

  1. Core Project
  2. Infrastructure Project
  3. Test Project
  4. MVC Project

Each project has its own purpose. You can probably guess from the projects' names what they'll contain: the Core and Infrastructure projects are Class Libraries, the Web project is an MVC project, and the Test project is a Unit Test project. Eventually, the projects in the solution explorer will look as shown in the image below:

As we progress in this post, we will learn in detail about the purpose of each project. To start, we can summarize the main objective of each project as follows:

  • Core: the domain entities and the repository interfaces, with no dependencies on external libraries
  • Infrastructure: the concrete database operations (here, Entity Framework), plus any web service or IO work
  • Test: unit tests for the Core and Infrastructure projects
  • MVC: the user interface, which consumes the repositories

Now that our understanding of the different projects is clear, let us go ahead and implement each project one by one. During the implementation, we will explore the responsibilities of each project in detail.

 

Core Project

In the Core project, we keep the entities and the repository interfaces (the database operation interfaces). The Core project contains information about the domain entities and the database operations required on them. In an ideal scenario, the Core project should not have any dependencies on external libraries. It must not contain any business logic, database operation code, etc.

In short, the core project should contain:

  • Domain entities
  • Repository interfaces or database operations interfaces on domain entities
  • Domain specific data annotations

The core project can NOT contain:

  • Any external libraries for database operations
  • Business logic
  • Database operations code

While creating the domain entities, we also need to decide which restrictions apply to the domain entities' properties, for example:

  • Whether a particular property is required. For instance, for a Product entity, the name of the product should be a required property.
  • Whether the value of a particular property must fall within a given range. For instance, for a Product entity, the price property should be within a given range.
  • Whether a particular property has a maximum length. For instance, for a Product entity, the name property value should not exceed the maximum length.

There could be many such data annotations on the domain entities properties. There are two ways we can think about these data annotations:

  1. As part of the domain entities
  2. As part of the database operations logic

It is purely up to us how we view data annotations. If we consider them part of the database operations, we can apply the restrictions using the database operation library's API. We are going to use Entity Framework for database operations in the Infrastructure project, so we could use the Entity Framework Fluent API to annotate the data.
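For reference, here is a minimal sketch of how the same Required and MaxLength restrictions could be expressed with the Fluent API instead. This snippet is only an illustration and is not part of the Core project; it would live inside the DbContext we create later in the Infrastructure project:

// Illustrative only: the same rules as the data annotations, expressed with the
// Entity Framework Fluent API inside a DbContext's OnModelCreating override.
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Product>()
        .Property(p => p.Name)
        .IsRequired()
        .HasMaxLength(100);

    modelBuilder.Entity<Product>()
        .Property(p => p.Price)
        .IsRequired();

    base.OnModelCreating(modelBuilder);
}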

If we consider them part of domain, then we can use System.ComponentModel.DataAnnotations library to annotate the data. To use this, right click on the Core project’s Reference folder and click on Add Reference. From the Framework tab, select System.ComponentModel.DataAnnotations and add to the project.

We are creating a ProductApp, so let us start with creating the Product entity. To add an entity class, right click on the Core project and add a class, then name the class Product.

usingSystem.ComponentModel.DataAnnotations;namespaceProductApp.Core
{publicclassProduct
    {publicint Id { get; set; } [Required] [MaxLength(100)]publicstring Name { get; set; } [Required]publicdouble Price { get; set; }publicbool inStock { get; set; }
    }
}

We have annotated the Product entity properties with Required and MaxLength, both of which are part of System.ComponentModel.DataAnnotations. Here we have considered the restrictions part of the domain, hence we used data annotations in the Core project itself.

We have created the Product entity class and applied data annotations to it. Now let us go ahead and create the repository interface. But before we create it, let us understand: what is a repository interface?

The repository interface defines all the database operations possible on the domain entities. All database operations that can be performed on the domain entities are part of the domain information, hence we will put the repository interface in the core project. How these operations can be performed will be the part of Infrastructure project.

To create a repository interface, right click on the Core project and add a folder named Interfaces. Once the Interfaces folder is created, right click on it, select Add New Item, and from the Code tab select Interface. Name the interface IProductRepository.

using System.Collections.Generic;

namespace ProductApp.Core.Interfaces
{
    public interface IProductRepository
    {
        void Add(Product p);
        void Edit(Product p);
        void Remove(int Id);
        IEnumerable<Product> GetProducts();
        Product FindById(int Id);
    }
}

Now we have created a Product entity class and a Product Repository Interface. At this point, the core project should look like this:

Let us go ahead and build the core project to verify everything is in place and move ahead to create Infrastructure project.

 

Infrastructure Project

The main purpose of the Infrastructure project is to perform database operations. Besides database operations, it can also consume web services, perform IO operations, etc. So the Infrastructure project may mainly perform the following operations:

  • Database operations
  • Working with WCF and Web Services
  • IO operations

We can use any database technology to perform the database operations. In this post we are going to use Entity Framework, and we are going to create the database using the Code First approach. In the Code First approach, the database gets created on the basis of the classes; here it will be created on the basis of the domain entities from the Core project.

To create the database from the Core project domain entity, we need to perform these tasks:

  1. Create a DataContext class
  2. Configure the connection string
  3. Create a database initializer class to seed data in the database
  4. Implement the IProductRepository interface

 

Adding References

First let’s add references of the Entity Framework and ProductApp.Core project. To add the Entity Framework, right click on the Infrastructure project and click on Manage Nuget Package. In the Package Manager Window, search for Entity Framework and install the latest stable version.

To add a reference of the ProductApp.Core project, right click on the Infrastructure project and click on Add Reference. In the Reference Window, click on the Project tab and select ProductApp.Core.

DataContext class

The objective of the DataContext class is to create the database in the Entity Framework Code First approach. We pass a connection string name in the constructor of the DataContext class. By reading the connection string, Entity Framework creates the database. If a connection string is not specified, Entity Framework creates the database on the local database server.

In the DataContext class:

  • Create a DbSet<Product> property. This is responsible for creating the table for the Product entity.
  • In the constructor of the DataContext class, pass the name of the connection string that specifies the information needed to create the database, for example the server name, database name, and login information.
  • If no connection string is passed, Entity Framework creates a database with the name of the data context class on the local database server.
  • The ProductContext class inherits the DbContext class.

The ProductContext class can be created as shown in the listing below:

using ProductApp.Core;
using System.Data.Entity;

namespace ProductApp.Infrastructure
{
    public class ProductContext : DbContext
    {
        public ProductContext()
            : base("name=ProductAppConnectionString")
        {
        }

        public DbSet<Product> Products { get; set; }
    }
}

Next we need to work on the connection string. As discussed earlier, we can either pass a connection string to specify the database creation information or rely on Entity Framework to create a default database at the default location for us. We are going to specify the connection string, which is why we passed the connection string name ProductAppConnectionString in the constructor of the ProductContext class. In the App.config file, the ProductAppConnectionString connection string can be created as shown in the listing below:

<add name="ProductAppConnectionString" connectionString="Data Source=(LocalDb)\v11.0;Initial Catalog=ProductAppJan;Integrated Security=True;MultipleActiveResultSets=true" providerName="System.Data.SqlClient" />

Database Initializer class

We create a database initializer class to seed the database with some initial values at the time of creation. To create the database initializer class, create a class which inherits from DropCreateDatabaseIfModelChanges<ProductContext>. There are other base classes available for creating a database initializer class. If we inherit DropCreateDatabaseIfModelChanges, a new database will be created every time the model changes. So, for example, if we add or remove properties from the Product entity class, Entity Framework will drop the existing database and create a new one. Of course this is not a great option, since the data will be lost too, so I recommend you explore the other base classes for the database initializer.
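For reference, one of those alternatives is CreateDatabaseIfNotExists<TContext>, which never drops an existing database. A minimal sketch (the class name ProductSafeInitializer is ours and this initializer is not used in this article's project):

public class ProductSafeInitializer : CreateDatabaseIfNotExists<ProductContext>
{
    protected override void Seed(ProductContext context)
    {
        // Runs only when the database is created for the first time,
        // so existing data is never dropped on model changes.
        base.Seed(context);
    }
}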

The database initializer class can be created as shown in the listing below. Here we are seeding the product table with two rows. To seed the data:

  1. Override Seed method
  2. Add product to Context.Products
  3. Call Context.SaveChanges()
using ProductApp.Core;
using System.Data.Entity;

namespace ProductApp.Infrastructure
{
    public class ProductInitalizeDB : DropCreateDatabaseIfModelChanges<ProductContext>
    {
        protected override void Seed(ProductContext context)
        {
            context.Products.Add(new Product { Id = 1, Name = "Rice", inStock = true, Price = 30 });
            context.Products.Add(new Product { Id = 2, Name = "Sugar", inStock = false, Price = 40 });
            context.SaveChanges();

            base.Seed(context);
        }
    }
}

So far, we have done all the Entity Framework Code First related work to create the database. Now let’s go ahead and implement IProductRepository interface from the Core project in a concrete ProductRepository class.

 

Repository Class

This is the class which will perform database operations on the Product entity. In this class, we will implement the IProductRepository interface from the Core project. Let us start by adding a class ProductRepository to the Infrastructure project and implementing the IProductRepository interface. To perform the database operations, we are going to write simple LINQ to Entities queries. The ProductRepository class can be created as shown in the listing below:

using ProductApp.Core;
using ProductApp.Core.Interfaces;
using System.Collections.Generic;
using System.Linq;

namespace ProductApp.Infrastructure
{
    public class ProductRepository : IProductRepository
    {
        ProductContext context = new ProductContext();

        public void Add(Product p)
        {
            context.Products.Add(p);
            context.SaveChanges();
        }

        public void Edit(Product p)
        {
            context.Entry(p).State = System.Data.Entity.EntityState.Modified;
            context.SaveChanges();   // persist the modified entity
        }

        public Product FindById(int Id)
        {
            var result = (from r in context.Products where r.Id == Id select r).FirstOrDefault();
            return result;
        }

        public IEnumerable<Product> GetProducts()
        {
            return context.Products;
        }

        public void Remove(int Id)
        {
            Product p = context.Products.Find(Id);
            context.Products.Remove(p);
            context.SaveChanges();
        }
    }
}

So far we have created a Data Context class, a Database Initializer class, and the Repository class. Let us build the infrastructure project to make sure that everything is in place. The ProductApp.Infrastructure project will look as given in the below image:

 

Now we're done creating the Infrastructure project. We have written all the database operations-related classes inside the Infrastructure project, and all the database-related logic is in a central place. Whenever any changes to the database logic are required, we only need to change the Infrastructure project.

 

Test Project

The biggest advantage of the Repository Pattern is testability. It allows us to unit test the various components without dependencies on other components of the project. For example, we have created the Repository class, which performs the database operations; to verify the correctness of its functionality, we should unit test it. We should also be able to write tests for the Repository class without any dependency on the web project or UI. Since we are following the Repository Pattern, we can write unit tests for the Infrastructure project without any dependency on the MVC project (the UI).

To write unit tests for the ProductRepository class, let us add the following references to the Test project:

  1. Reference of ProductApp.Core project
  2. Reference of ProductApp.Infrastructure project
  3. Entity Framework package

 

To add the Entity Framework, right click on the Test project and click on Manage NuGet Packages. In the Package Manager window, search for Entity Framework and install the latest stable version.

To add a reference of the ProductApp.Core project, right click on the Test project and click on Add Reference. In the Reference Window, click on Project tab and select ProductApp.Core.

To add a reference of the ProductApp.Infrastructure project, right click on the Test project and click on Add Reference. In the Reference Window, click on Project tab and select ProductApp.Infrastructure.

Copy the Connection String

Visual Studio always reads the config file of the running project. To test the Infrastructure project, we will run the Test project, hence the connection string should be part of the App.config of the Test project. Let us copy and paste the connection string from the Infrastructure project into the Test project.

We have added all the required references and copied the connection string. Let's go ahead now and set up the test class. We'll create a test class with the name ProductRepositoryTest. The Test Initialize function is executed before the tests run. In it, we need to create an instance of the ProductRepository class and set the ProductInitalizeDB class as the database initializer to seed the data before we run the tests. The test initializer can be written as shown in the listing below:

[TestClass]
public class ProductRepositoryTest
{
    ProductRepository Repo;

    [TestInitialize]
    public void TestSetup()
    {
        ProductInitalizeDB db = new ProductInitalizeDB();
        System.Data.Entity.Database.SetInitializer(db);
        Repo = new ProductRepository();
    }
}

Now that we've written the test initializer, let's write the very first test, which verifies whether the ProductInitalizeDB class seeds two rows into the Product table. Since it is the first test we execute, it will also verify whether the database gets created. So essentially we are writing a test:

  1. To verify database creation
  2. To verify number of rows inserted by the seed method of Product Database Initializer
[TestMethod]
public void IsRepositoryInitalizeWithValidNumberOfData()
{
    var result = Repo.GetProducts();
    Assert.IsNotNull(result);

    var numberOfRecords = result.ToList().Count;
    Assert.AreEqual(2, numberOfRecords);
}

As you can see, we’re calling the Repository GetProducts() function to fetch all the Products inserted while creating the database. This test is actually verifying whether GetProducts() works as expected or not, and also verifying database creation. In the Test Explorer window, we can run the test for verification.

To run the test, first build the Test project, then from the top menu select Test -> Windows -> Test Explorer. In the Test Explorer, we will find all the tests listed. Select the test and click Run.

Let’s go ahead and write one more test to verify Add Product operation on the Repository:

[TestMethod]
public void IsRepositoryAddsProduct()
{
    Product productToInsert = new Product
    {
        Id = 3,
        inStock = true,
        Name = "Salt",
        Price = 17
    };

    Repo.Add(productToInsert);

    // If the product inserts successfully,
    // the number of records will increase to 3
    var result = Repo.GetProducts();
    var numberOfRecords = result.ToList().Count;
    Assert.AreEqual(3, numberOfRecords);
}

To verify the insertion of the product, we call the Add function on the repository. If the product is added successfully, the number of records will increase from 2 to 3, and that is what we verify. On running the test, we will find that it passes.
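Along the same lines, here is a sketch of a test for the Remove operation (the test name is ours and is not part of the original article):

[TestMethod]
public void IsRepositoryRemovesProduct()
{
    // Remove the seeded product with Id = 1 ("Rice")
    Repo.Remove(1);

    // The removed product should no longer be found
    var removed = Repo.FindById(1);
    Assert.IsNull(removed);
}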

In this way, we can write tests for all the database operations of the ProductRepository class. Now we are sure that we have implemented the Repository class correctly because the tests are passing, which means the Infrastructure and Core projects can be used with any UI (in this case MVC) project.

 

MVC or Web Project

Finally we have gotten to the MVC project! Like the Test project, we need to add the following references:

  1. Reference of ProductApp.Core project
  2. Reference of ProductApp.Infrastructure project

To add a reference of the ProductApp.Core project, right click on the MVC project and click on Add Reference. In the Reference Window, click on Project tab and select ProductApp.Core.

To add a reference of the ProductApp.Infrastructure project, right click on the MVC project and click on Add Reference. In the Reference Window, click on Project tab and select ProductApp.Infrastructure.

 

Copy the Connection String

Visual Studio always reads the config file of the running project. When we run the application, the MVC project will be the running project, so the connection string should also be part of its Web.config. Let's copy and paste the connection string from the Infrastructure project into the Web.config of the MVC project.

 

Scaffolding the Application

We should have everything in place to scaffold the MVC controller. To scaffold, right click on the Controller folder and select MVC 5 Controller with Views, using Entity Framework as shown in the image below:

Next we will see the Add Controller window. Here we need to provide the model class and data context class information. In our project, the model class is the Product class from the Core project and the data context class is the ProductContext class from the Infrastructure project. Let us select both classes from the dropdowns as shown in the image below:

Also we should make sure that the Generate Views, Reference script libraries, and Use a layout page options are selected.

On clicking Add, Visual Studio will create the ProductsController and Views inside Views/Products folder. The MVC project should have structure as shown in the image below:

At this point if we go ahead and run the application, we will be able to perform CRUD operations on the Product entity.

Problem with Scaffolding

But we are not done yet! Let’s open the ProductsController class and examine the code. On the very first line, we will find the problem. Since we have used MVC scaffolding, MVC is creating an object of the ProductContext class to perform the database operations.

Any dependency on the context class binds the UI project and the database tightly to each other. As we know, the data context class is an Entity Framework component, and we do not want the MVC project to know which database technology is being used in the Infrastructure project. Besides, we haven't tested the data context class; we've tested the ProductRepository class. Ideally, we should use the ProductRepository class instead of the ProductContext class to perform database operations in the MVC controller. To summarize:

  1. MVC scaffolding uses the data context class to perform database operations. The data context class is an Entity Framework component, so using it tightly couples the UI (MVC) with the database (EF) technology.
  2. The data context class is not unit tested, so it is not a good idea to use it.
  3. We have a tested ProductRepository class. We should use it inside the controller to perform database operations. Also, the ProductRepository class does not expose the database technology to the UI.

To use the ProductRepository class for database operations, we need to refactor the ProductsController class. To do so, there are two steps we need to follow:

  1. Create an object of ProductRepository class instead of ProductContext class.
  2. Call methods of ProductRepository class to perform database operations on Product entity instead of methods of ProductContext class.

In the listing below, I have commented out the code that uses ProductContext and replaced it with calls to ProductRepository methods. After refactoring, the ProductsController class will look like the following:

using System;
using System.Net;
using System.Web.Mvc;
using ProductApp.Core;
using ProductApp.Infrastructure;

namespace ProductApp.Web.Controllers
{
    public class ProductsController : Controller
    {
        //private ProductContext db = new ProductContext();
        private ProductRepository db = new ProductRepository();

        public ActionResult Index()
        {
            //return View(db.Products.ToList());
            return View(db.GetProducts());
        }

        public ActionResult Details(int? id)
        {
            if (id == null)
            {
                return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
            }
            //Product product = db.Products.Find(id);
            Product product = db.FindById(Convert.ToInt32(id));
            if (product == null)
            {
                return HttpNotFound();
            }
            return View(product);
        }

        public ActionResult Create()
        {
            return View();
        }

        [HttpPost]
        [ValidateAntiForgeryToken]
        public ActionResult Create([Bind(Include = "Id,Name,Price,inStock")] Product product)
        {
            if (ModelState.IsValid)
            {
                //db.Products.Add(product);
                //db.SaveChanges();
                db.Add(product);
                return RedirectToAction("Index");
            }
            return View(product);
        }

        public ActionResult Edit(int? id)
        {
            if (id == null)
            {
                return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
            }
            Product product = db.FindById(Convert.ToInt32(id));
            if (product == null)
            {
                return HttpNotFound();
            }
            return View(product);
        }

        [HttpPost]
        [ValidateAntiForgeryToken]
        public ActionResult Edit([Bind(Include = "Id,Name,Price,inStock")] Product product)
        {
            if (ModelState.IsValid)
            {
                //db.Entry(product).State = EntityState.Modified;
                //db.SaveChanges();
                db.Edit(product);
                return RedirectToAction("Index");
            }
            return View(product);
        }

        public ActionResult Delete(int? id)
        {
            if (id == null)
            {
                return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
            }
            Product product = db.FindById(Convert.ToInt32(id));
            if (product == null)
            {
                return HttpNotFound();
            }
            return View(product);
        }

        [HttpPost, ActionName("Delete")]
        [ValidateAntiForgeryToken]
        public ActionResult DeleteConfirmed(int id)
        {
            //Product product = db.FindById(Convert.ToInt32(id));
            //db.Products.Remove(product);
            //db.SaveChanges();
            db.Remove(id);
            return RedirectToAction("Index");
        }

        protected override void Dispose(bool disposing)
        {
            if (disposing)
            {
                //db.Dispose();
            }
            base.Dispose(disposing);
        }
    }
}

After refactoring, let’s go ahead and build and run the application – we should be able to do so and perform the CRUD operations.

Injecting the Dependency

Now we’re happy that the application is up and running, and it was created using the Repository pattern. But still there is a problem: we are directly creating an object of the ProductRepository class inside the ProductsController class, and we don’t want this. We want to invert the dependency and delegate the task of injecting the dependency to a third party, popularly known as a DI container. Essentially, ProductsController will ask the DI container to return the instance of IProductRepository.

There are many DI containers available for MVC applications. In this example we'll use one of the simplest, the Unity DI container. To add it, right click on the MVC project and click Manage NuGet Packages. In the NuGet Package Manager, search for Unity.Mvc and install the package.

Once the Unity.Mvc package is installed, let us go ahead and open the App_Start folder. Inside the App_Start folder, we will find the UnityConfig.cs file. In the UnityConfig class, we have to register the type. To do so, open the RegisterTypes function in the UnityConfig class and register the type as shown in the listing below:

public static void RegisterTypes(IUnityContainer container)
{
    // TODO: Register your types here
    container.RegisterType<IProductRepository, ProductRepository>();
}

We have registered the type with the Unity DI container. Now let us go ahead and do a little bit of refactoring in the ProductsController class. In the constructor of ProductsController we will take a reference to the repository interface. Whenever required, the Unity DI container will inject a concrete ProductRepository object into the application by resolving the type. We need to refactor the ProductsController as shown in the listing below:

public class ProductsController : Controller
{
    IProductRepository db;

    public ProductsController(IProductRepository db)
    {
        this.db = db;
    }

Let us go ahead and build and run the application. We should have the application up and running, and we should be able to perform CRUD operations using the Repository Pattern and Dependency Injection!

Conclusion

In this article, we learned step by step how to create an MVC application following the Repository Pattern. In doing so, we put all the database logic in one place, and whenever changes are required, we only need to change the repository and test it. The Repository Pattern also loosely couples the application UI with the database logic and the domain entities, and makes your application more testable.

I hope you found this post useful, thanks for reading!


How to use GitHub like a Pro


Let’s begin today’s post with a fact: As of right now (at the time of writing), GitHub has 21 million repositories and 9 million users - which isn’t too bad at all! For developers, GitHub offers an enormous range of tools, Wikis and information, so we want to help show you how to use it like a pro.

Before understanding what GitHub is, the first thing to do is understand what Git is. Git is an open source version control system developed by the creator of Linux, Linus Torvalds. Like any other version control system, Git manages and stores the different versions of a project.

GitHub is built on top of the Git version control system: it brings your project to the web for social coding and lets you share your code with other developers, inviting them to extend or improve it. It provides collaboration features like wikis, project information, version releases, a list of contributors to the repository, open and closed issues and more. All this allows developers to easily download the new version of an application, make changes and upload them back to the repository.

Git itself is a command-line tool for version controlling your source code. GitHub also provides a graphical client which lets you contribute to projects; you can download the desktop version of GitHub here.

Getting started with GitHub

Repository – A repository is a directory or storage space where all your project-related files are stored. This can include source code files, user documentation or installation documentation; everything can be stored in the repository. Each project has its own repository along with a unique URL on GitHub. You can create either a public repository, which is free and open to everyone, or a private repository, which is a paid option.

To create a repository, go to the Repository tab and click on 'Create new repository', where you can then fill in details like the repository name and description.

Forking – Forking a repository creates a copy of an original project as a repository in your own GitHub account, allowing you to work on the source code independently. You can make changes to the source code (such as fixing bugs) and commit them to your repository.

In order to fork a repository, navigate to the repository URL and click on 'Fork' to create a copy of the repository in your own account.

Commit – A commit records a set of changes to the source code. Every time you commit, Git creates a unique ID that identifies exactly which changes to which files were submitted in that particular commit. A commit also takes a title and description specifying what was changed and what the commit signifies.

Once you have forked a repository, you can make changes in the file. Let’s make changes to the readme.md file. I have updated the readme.md file by adding a 2nd line as shown below.

To update this file, go to the ‘commit changes’ section located at the bottom of the file and update the title of the commit as well as the description:

Upon clicking 'Propose File Change', the changes will be committed to a new branch.

Pull Request – Once you are done making changes, you can submit a Pull Request to the original project owner. If your fix or changes are approved after testing, he or she can pull your changes into the original project. All Pull Requests are managed from the self-titled tab, which shows every Pull Request submitted by each contributor. It compares the updated source code with the original source code and provides the list of files that were changed and committed. The owner of the project gets a comprehensive view of all the updated changes in each file, along with a comparison view.

Once you’re done committing changes in the new branch, you’ll be taken to a Pull Request screen to generate the pull request for your new file changes.

Click on the 'Create Pull Request' button. A Pull Request will be created and will be visible to the project owner under the Pull Requests section.

Merging a Pull Request – Once the changes in the Pull Request are reviewed and approved, it is time to merge them into the original source code. In order to merge the request into the original repository, you need to have push access to the repository.

The project owner can select the submitted Pull Request and review any changes from the ‘Files Changed’ tab. Clicking on each file will show you what has been changed, added or deleted.

Once the project owner verifies and approves the changes, the new project is ready to merge into the original project.

Click on ‘Merge pull request’ to merge it into the original branch.

GitHub Visual Studio Extension

GitHub is a powerful platform built on the open source Git version control system, and provides many capabilities through both the Git console and the GitHub desktop client. But, like every source control system, it benefits from direct integration into your developer tools. Fortunately, Microsoft and GitHub recently announced the availability of GitHub Enterprise in Azure, and they have also launched the GitHub Visual Studio Extension, allowing developers to easily connect and work with GitHub projects directly from Visual Studio. Team Explorer support for Git allows you to do commits, branching and conflict resolution. You can download the extension here.

Why GitHub?

GitHub is not just another version control tool for keeping track of changes. It is built on a distributed version control system and allows users to share code with developers across the globe.

Other key benefits of GitHub include:

 

  • Support from the Open Source Community
  • A distributed Version Control system
  • Social Networking features like Wikis
  • Manage code with multiple options
  • Show off! If you’ve created something fancy, GitHub provides the easiest way to share it with the Open Source Community.

GitHub also offers social networking capabilities to help you increase your network so that your code can be updated by various developers. As the saying goes, many hands make light work, and for that reason alone you should be using it.

Create modern Web apps for any scenario with your favorite frameworks. Download Ignite UI today and experience the power of Infragistics jQuery controls.

 

 

Developer Quotes: Edition 5


Check out these two pearls of wisdom. Tweet 'em, like 'em, tumble 'em... hey, even print them out and tape them to your coworker's monitor! Whatever works for you! These have just the right amount of sass, so I know you're gonna love 'em.

Share With The Code Below!

[Image: Developer Quotes – Sickel | Infragistics HTML5 Controls: http://www.infragistics.com/products/jquery]


Share With The Code Below!

[Image: Developer Quotes – Language | Infragistics HTML5 Controls: http://www.infragistics.com/products/jquery]

Keming, or the importance of not being a click.


This is Important

Let me start with some namedropping, to prove the importance of the subject. Apple just changed the kerning of the word “click“ on the El Capitan website (and tens of articles were published instantly on the topic), because the ‘c’ and ‘l’ were almost stuck together, forming a lovely ‘d’. And no marketing or PR department wants to deal with the consequences.

Subtle, but important kerning decision.

A small step for Apple, but a huge leap for kerning.

 

Get Comfortable with the Terms

KERNING, in short, is the distance between two letters in a word.

KEMING is a fake term to describe what happens when you don’t kern properly. Most often, two adjacent letters are placed so close to one another that they combine visually to form a third letter, sometimes making you sound like a click.

Keming as a term was first coined in 2008

Keming. n. The result of improper kerning. The term was first coined in 2008 by David Friedman.

 

Where Do We See Keming?

It’s everywhere - on websites, packaging, custom lettering, posters, menus, signs.

Most of the time it's just a small illegibility issue that bothers a handful of designers.

ST OP, in the name of love.

Often times, though, keming can cause confusion and mockery for your brand.

Spam Restaurant? (img source)

 

Or it can be really, really bad.

Again with this click...

 

Bonus: Diesel, being Diesel, is being brave and breaking all the rules:

Diesel - Only the brave. image source

Now that we’ve seen that even Apple can make this mistake, here are some other beautiful examples we should watch out for:

  • FINAL
  • FLICK
  • pom
  • burn
  • pen is
  • therapist

Keming can produce bad results both when letters stick together (e.g. burn = bum) and when gaps appear in the middle of a word, separating it into two meaningful parts (e.g. therapist = the rapist). Check out some more “fun“ images here and here.

Here are some kerning tips to improve the look and feel of your design and not frustrate marketing and PR.

 

How to Avoid Keming:

1. Use trusted fonts. Every font has built-in kerning, and the free, cheap, poorly designed fonts can play expensive tricks on you.

2. Whenever you have to use auto kerning, double and triple check the end result

3. For print. Make sure that when you send print files over to a printing house, all your fonts are in curves. Always.

4. For print. Adobe’s Creative Cloud has the “optical“ option for kerning letters that automatically "makes typography great again“. Learn more about kerning in Illustrator

5. For web. Use either automatic kerning with CSS, or go the extra mile and use lettering.js, kerning.js or a similar library

6. Check again. Read the whole text. Give it to as many people as possible to proofread.

 

Fun and Games

How keen is your eye when it comes to kerning?

Play "Kern Me"

And lastly

Show me you give a FLICK about kerning and share your thoughts & some examples in the comments section.

Responsive images on the web


According to HTTP Archive, the top 100 websites on the Internet today look something like this:

That's right: as of November 2015, images make up roughly two thirds of the average page's weight. Website sizes are getting ever more bloated, and this can lead to slow load times and a poor UX, especially on mobile devices.

However, as we all know, a ‘picture paints a thousand words’ and clients are very keen on having image rich pages on their websites, regardless of the device used to view them. They understand that customers won’t read every word of text on their website, but images can create a far bigger impact in terms of perception and understanding of their brand. Naturally, they want those images to look great, run fast and fit to the parameters of any device.

So, images are getting ever more popular, yet they can damage User Experience. Responsive images are the solution here, and can play a big role in overcoming slow load times. However, as useful as they are, responsive images aren’t the easiest thing to implement. There are numerous methods of deploying them, yet each has its own limits and drawbacks.

There’s been a lot said about how to implement responsive images (see detailed guides here, here and here).  Today we’re going to overview responsive web design more generally, and think about the kinds of questions you need to ask when selecting your method.

The basic approach

The basic approach to making an image responsive is to set its maximum width at 100%. This means your image will always fit to the container around it and won’t ever exceed it either. This works for basic pages, yet if you have a number of different elements working together, you’ll soon run into problems. Other factors to take into account include:

  • Performance and bandwidth

The principal drawback of the ‘basic’ solution is that every device receives the same image. This is OK if your site is populated by small logos or images. However, if you’re using big pictures, sending these to devices which are connecting over limited mobile data connections will hold up load times considerably.

  • Art Direction

A second drawback with the ‘basic’ approach is that while images will fit on any device, they may lose their impact or power on each. The city vista or mountain view that looks great on a landscape desktop screen may look muddled or out of place on a smartphone. The message it’s trying to convey might be lost; you may prefer the mobile screen version to ‘zoom in’ on a certain aspect of the website image.

So, how can you overcome these issues with the basic approach?

Questions to ask

Before delving into solutions, you first need to decide which issue you want to solve. There are a range of solutions to the responsive image problem that have been proposed. However, each has its own specific strengths and limits. Some will help with certain issues; others will be stronger on others. It’s most important here to understand what your client is looking for; this should inform the solution you choose.

  • Is the problem one with art direction?

  • Does the client have a huge website where they want every image to become responsive?

  • Should all images load, or should they load dynamically via JavaScript?

  • Is testing the user’s bandwidth a priority - so you can see whether their connection can handle high-res images?

  • Can you make use of a third party solution, or do you need to keep it hosted in-house?

There are a lot of solutions out there that have been created to respond to the responsive design dilemma. The following are some of the most exciting solutions that have been developed:

1. PictureFill

PictureFill offers a very simple script that displays adaptive images at page loading time. It does require some JavaScript markup and doesn’t do any bandwidth detection, however.

2. HiSRC

A jQuery plugin that lets you create low, medium and high-res versions of an image. The script detects network speed and retina-readiness to show the most appropriate version. You will need to be using jQuery, however, and it requires custom markup in the HTML.

3. Adaptive Images 

Adaptive Images is a largely server-side solution. You’ll need to install it (which can take a good while). However, once installed and configured, the PHP script will resize any image for you. It doesn’t, however, detect bandwidth and doesn’t solve art-direction problems.

Other solutions out there include:

Knowing how to do responsive web design is increasingly important in today’s world of diverse devices. Understanding what your client wants from their web pages should help you select the most appropriate tool.  

Create high-performance, touch-first, responsive apps with AngularJS directives, Bootstrap support and Microsoft MVC server-side widgets. Download Ignite UI today for your high-demand data needs.

 

Developer News - What's IN with the Infragistics Community? (2/22-3/6)

Easily extend your IDE with Extensibility feature for Visual Studio 2015


 

Visual Studio 2015 is a very advanced IDE with a great number of useful options, but sometimes you may find that there are not enough features to meet your needs. Certain operations may be even more automated or you might prefer to have more project types or languages supported. Sounds familiar? If so, there is an easy way to deal with such situations.

A while back, there were Macros and Add-ins to make the IDE more developer-friendly and meet our needs. In Visual Studio 2015, those ways of extending the environment are no longer supported; instead there is Visual Studio Extensibility (VSX or VSIX). This capability was first released in the 2005 version and is now mature enough to give us a great experience building our own plugins.

Thanks to the Extensibility feature we can extend menus and commands, create customized tool windows, editors, project templates, extend user settings, properties in the Property Window, create Visual Studio Isolated Shell apps and many more. Basically the only limitations are our needs and imagination. In this article we will give you a brief look at the capabilities of Visual Studio Extensions.

To start with extensions development we need to have Visual Studio 2015 with the Visual Studio SDK. You can get Visual Studio Community for free here: Visual Studio Community 2015 Download. During installation, just select Visual Studio Extensibility Tools and it will be installed together with other parts of the IDE.

If you already have Visual Studio 2015 installed just open it, go to File/New Project and expand Visual C#/Extensibility. Choose Visual Studio Extensibility Tools and follow the instructions.

MSBuild extension (automated custom build)

Today we are going to make a simple but still very useful extension which will let us use a custom *.proj file to build solutions using MSBuild. First let’s create a VSIX Project. To do so, go to File/New/Project, and then choose Extensibility/ VSIX Project and specify the name. In this case it will be CustomBuilder (fig.1).

Figure 1

Visual Studio has generated a Getting Started with Visual Studio Extensions help page. You can delete these files because you won't need them (unless you'd like to read them first). Delete index.html and stylesheet.css, but keep source.extension.vsixmanifest; we're going to use it later.

Add Package

Now we need to add a package by adding a new item to our project (fig. 2).

Figure 2

In the Add New Item window, select Visual C# Items/Extensibility/VSPackage and create a new Visual Studio Package. Name it CustomBuildPackage.cs (fig. 3).

Figure 3

Add Command

We need one more item in the project: a command. To add it, complete the same steps as for the package, but in the Add New Item window choose Custom Command instead of Visual Studio Package. The name will be CustomBuildCommand.cs (fig. 4).

Figure 4

Loading package after opening solution

We want our package to be available only while a solution is open. To ensure this, add the ProvideAutoLoad attribute to the CustomBuildPackage class by placing the following code before the class definition (fig. 5):

[ProvideAutoLoad(Microsoft.VisualStudio.Shell.Interop.UIContextGuids.SolutionExists)]
public sealed class CustomBuildPackage : Package
{
    //...
}

Figure 5

 

In CustomBuildPackage.vsct, set the command flag to DefaultDisabled in the Button tag (fig. 6):

<Buttons>
  <Button guid="guidCustomBuildPackageCmdSet" id="CustomBuildCommandId" priority="0x0100" type="Button">
    <Parent guid="guidCustomBuildPackageCmdSet" id="MyMenuGroup" />
    <Icon guid="guidImages" id="bmpPic1" />
    <CommandFlag>DefaultDisabled</CommandFlag>
    <Strings>
      <ButtonText>Invoke CustomBuildCommand</ButtonText>
    </Strings>
  </Button>
</Buttons>

Figure 6

 

Command button layout and shortcut key

To customize the appearance of our command you can specify some layout options in CustomBuildPackage.vsct file. Change <ButtonText> tag to set the text that will be displayed in the Tools menu.

Icon

You can also add an icon to distinguish your plugin. First add a <GuidSymbol> tag to the <Symbols> section:

<GuidSymbol name="cmdIcon" value="{9194BBE0-78F3-45F7-AA25-E4F5FF6D10F9}">
  <IDSymbol name="commandIcon" value="1" />
</GuidSymbol>

Figure 7

 

To generate a value for the value attribute, open the Tools menu and select Generate GUID. Click the 5th format, copy the generated GUID, exit the tool and paste it into your code (fig. 8).

Figure 8

You have to specify which picture you want to use. Put a *.png file in Resources folder under your project directory (the icon should be 16x16 px) and add this code to <Bitmaps> section.

<Bitmaps>
  <Bitmap guid="cmdIcon" href="Resources\commandIcon.png" usedList="commandIcon" />
</Bitmaps>

 

Now change <Icon> tag to refer to the right file:

<Icon guid="cmdIcon" id="commandIcon" />

 

Shortcut Key

To make the tool more user-friendly you can add a shortcut key for it. All you need to add is 3 lines of code just before the <Symbols> tag:

<KeyBindings>
  <KeyBinding guid="guidCustomBuildPackageCmdSet" id="CustomBuildCommandId" editor="guidVSStd97" key1="1" mod1="CONTROL" />
</KeyBindings>

 

Here key1 specifies an alphanumeric key or one of the VK_ constants, and mod1 is any of CTRL, ALT and SHIFT.

Now our command should look like Fig. 9.

Figure 9

MSBuilder logic

The extension looks like it's supposed to, but it does nothing yet. To make it work, add the following class:

using System.IO;

namespace CustomBuilder
{
  public class CustomMsBuilder
  {
    private string solutionPath;

    public string msBuildLocation
    {
      get
      {
        var currentRuntimeDirectory = System.Runtime.InteropServices.RuntimeEnvironment.GetRuntimeDirectory();
        return System.IO.Path.Combine(currentRuntimeDirectory, "msbuild.exe");
      }
      private set
      { }
    }

    public CustomMsBuilder(string solutionPath)
    {
      this.solutionPath = solutionPath;
    }

    public string BuildSolution()
    {
      return FormatOutput(StartMsBuildWithOutputString());
    }

    private string StartMsBuildWithOutputString()
    {
      var outputString = "";
      using (var customBuilder = GetMsBuildProcess())
      {
        var standardOutput = new System.Text.StringBuilder();
        while (!customBuilder.HasExited)
        {
          standardOutput.Append(customBuilder.StandardOutput.ReadToEnd());
        }
        outputString = standardOutput.ToString();
      }
      return outputString;
    }

    private System.Diagnostics.Process GetMsBuildProcess()
    {
      var startInfo = new System.Diagnostics.ProcessStartInfo(msBuildLocation, solutionPath);
      startInfo.RedirectStandardOutput = true;
      startInfo.UseShellExecute = false;
      return System.Diagnostics.Process.Start(startInfo);
    }

    private string FormatOutput(string processedOutput)
    {
      string solutionName = Path.GetFileName(solutionPath);
      var header = "CustomBuilder - Build " + solutionName + "\r\n--\r\n";
      return header + processedOutput;
    }
  }
}
 

This class runs the MSBuild.exe process, builds the opened solution from the custom *.proj file (if the project contains one), and formats and redirects the output of the MSBuild.exe process so that our extension can display it to the user.

The constructor of the class accepts a string with the solution's path and stores it in a field so that it can be read later by the other methods.

The public method BuildSolution() gets the right MSBuild path (using the getter of the msBuildLocation property), starts the msbuild.exe process with the solution path as a parameter, reads the output from the console (using a StringBuilder), and returns the formatted result with a header of the form "CustomBuilder - Build <solution name>".
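As a quick illustration only (the solution path below is hypothetical, and this snippet is not part of the extension itself), the class could be exercised like this:

// Hypothetical usage sketch: build a solution and print the formatted output.
var builder = new CustomMsBuilder(@"C:\Projects\ProductApp\ProductApp.sln");
string output = builder.BuildSolution();
System.Console.WriteLine(output);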

After the CustomMsBuilder class is finished, it should be called from the CustomBuildCommand. In CustomBuildCommand you have to update the callback function as shown below:

 

Add a using:

using EnvDTE80;

 

 

Change the callback name:

private CustomBuildCommand(Package package)
{
    //...
    if (commandService != null)
    {
        var menuItem = new MenuCommand(this.CustomBuildCallback, menuCommandID);
        commandService.AddCommand(menuItem);
    }
}

 

Change the callback function and add an additional one:

private void CustomBuildCallback(object sender, EventArgs e)
{
    var cMsBuilder = new CustomMsBuilder(GetSolutionPath());
    var outputMessage = cMsBuilder.BuildSolution();

    // Displays the output – we'll create this method in the next step
    WriteToOutputWindow(outputMessage);
}

public string GetSolutionPath()
{
    DTE2 dte = ServiceProvider.GetService(typeof(SDTE)) as DTE2;
    return dte.Solution.FullName ?? "";
}

 

Output

We can display the result in the output window (the same way Visual Studio informs us whether the solution build succeeded or not). In order to do that, add the following code to the CustomBuildCommand.cs file (just below the existing methods).

private void WriteToOutputWindow(string message)
{
    var pane = GetOutputPane(PaneGuid, "CustomBuilder Output", true, true, message);
    pane.OutputString(message + "\n--");
    pane.Activate();   // Activates the new pane to show the output we just added
}

private IVsOutputWindowPane GetOutputPane(Guid paneGuid, string title, bool visible, bool clearWithSolution, string message)
{
    IVsOutputWindow output = (IVsOutputWindow)ServiceProvider.GetService(typeof(SVsOutputWindow));
    IVsOutputWindowPane pane;

    output.CreatePane(ref paneGuid, title, Convert.ToInt32(visible), Convert.ToInt32(clearWithSolution));
    output.GetPane(ref paneGuid, out pane);
    return pane;
}

 

We also need to generate a new GUID for the output pane and assign it to a field at the beginning of the class:

namespace CustomBuilder
{
  internal sealed class CustomBuildCommand
  {
    //...
    // GUID identifying our custom output pane, used by WriteToOutputWindow above
    public static readonly Guid PaneGuid = new Guid("84a7d8e5-400d-40d4-8d92-290975ef8117");
    //...
  }
}

Distribution

After you’re done with the development of your extension and you’re sure it works fine (double check!) you can share the extension easily with other developers. First you have to open source.extension.vsixmanifest and specify some information about your extension. You should fill in all metadata information, target versions of Visual Studio, dependencies and other known information.

Figure 10

There are two supported ways to distribute your extension. First, you can simply share the *.vsix binary file of your Visual Studio extension: send it by email, share a link on FTP or in the cloud, or distribute it however you wish. You can find the file in the bin/Release folder of the solution (if you built a release version of your extension). All the recipients have to do is download the file, close Visual Studio, double click on the file and follow the installation wizard, which is straightforward.

Visual Studio Gallery

If you want to reach a larger number of Visual Studio users, you can also publish the extension on the Visual Studio Gallery. To do so requires a few steps:

Sign in to Visual Studio Gallery website with your Microsoft account. Choose “Upload” option on the screen and create an MSDN Profile if you don’t have any. Specify your Display Name, agree to the Terms of Use and click the Continue button.

On the next few screens you have to input some information about your extension such as Extension type– whether it is a tool, control, template or storyboard shape (which applies only to PowerPoint, so not here).

After you have specified the type of your plugin, you can choose where you will store the *.vsix file. You can upload it to Microsoft servers or share a link to some custom location in the Internet (e.g. if you have your own server).

After you have uploaded (or linked to) the proper extension, you can add Basic information. Some of this is filled in automatically based on our source.extension.vsixmanifest from the project, like the title, version or summary. You can choose the category and add some tags to help users to find your extension easily. In the Cost category you can specify whether you want to sell your extension (Trial or Paid option) or provide it for Free. A really nice feature included here is the option to share a link to your code repository if you want to let users to browse through your code.

You have to also provide more information about your plugin. In the Description field you can use given templates or create your own document about your extension. Videos and images can be included so you can fully present your plugin.

Later you have to agree to the Contribution Agreement by checking the box and click Create contribution button.

After the saving process your extension is not published yet, but you can Edit, Delete, Translate or Publish your project to finally make it live.

Now it's easy to find your plugin on the Visual Studio Gallery website or in Extensions and Updates in your Visual Studio by typing in the name or keywords which were specified while uploading your project.

Summary

The extensibility feature in Visual Studio 2015 is very easy and straightforward to use, as you can see. You create a project just as you'd create a simple console application in Visual Studio, and you can share it easily too, extending the functionality of your IDE.

 

Want to build your desktop, mobile or web applications with high-performance controls? Download Ultimate Free trial today or contact us and see what it can do for you.

Code faster with less bugs


Feeling anxious about your code? Does it seem as if your boss is always waiting around on you, breathing down your neck? Are you worried your colleagues are wondering why you’re taking so long? Well don’t worry, you’re not the only one!

Any developer worth his or her salt will have gone through the exact same thing. Coding is a creative endeavor; you’re not on a production line and your job isn’t about producing the same thing again and again. Just like music, writing or acting, there’s often not a ‘right’ way of getting to a functional end product, although there are a lot of ways of doing it better – and faster.

Of course, if you’re writing quality, well-tested, bug-free, easy-to-maintain code, you’re actually going to save yourself and your colleagues a lot of effort down the line. However, even if you’re doing all the best groundwork, the pressure to do more, faster is always going to be there.

No one likes to feel as if they’re holding the team back – it can feel embarrassing and frustrating. Fortunately, there are quite a few simple, pragmatic steps you can take to code faster and with fewer bugs. Reading this post shows you’re taking a positive approach to your work, so cut yourself some slack, you’re on the right path!

We spoke to our highly experienced team of developers here at Infragistics to see what advice they’d give.

1. Learn from more experienced developers

As with many problems in life, you’re probably not the first person to have had this issue. That should feel like a relief - knowing other people have struggled with coding fast and have gotten better shows you can too. Many of our devs told us they’d really benefited from shadowing more experienced programmers themselves. Working on a project with a seasoned pro will help you pick up a lot of tricks of the trade. You can see how they approach a problem, what code they reuse and how they test for bugs.

Take this same approach online, too: sites like Stack Overflow are immense resources where intelligently asked questions are met with thoughtful answers.

2. Are you doing unnecessary work?

As we stated, coding is creative, with many ways to solve many problems. But for common tasks and issues it is often the case that someone has solved the issue before (and potentially more elegantly). Again, the web is the developer’s friend. Sites like C# Design Patterns offer good solutions to common problems. More general patterns for solving common programming problems are offered by sites like TutsPlus.

Building on this theme, every developer can code faster with fewer bugs by using code libraries. These are collections of prebuilt code that perform set functions. Using libraries in your code is even better than using patterns, as the code is already written for you. JavaScript is a great example of a language that has many excellent libraries fulfilling many useful purposes.

3. Don’t code, plan

That’s right: if you want to code faster with fewer bugs, stop coding. Using libraries and patterns like those described above is one route. Another is to stop coding altogether and plan. You can cut the development time of an app by building with prototyping tools. Indigo Studio, our UX prototyping tool, saves developers a lot of time by helping them build a working prototype of their app without writing a single line of code. You get from idea to finished product in far less time.

4. Don’t replicate your code across platforms

When you’re building an app for multiple operating systems, our developers recommend a platform like Xamarin. Xamarin speeds up your coding time by letting you build your app once in C# and deploy it rapidly across iOS and Android. Compared to writing (and supporting) fresh code for each platform, you save a lot of time and energy getting your app ready for the stores.

5. Objectively measure how you spend your time

Our team also recommended you start measuring your own productivity. This is a little like running your own experiment: spend a week with pen and paper by your desk and simply track how much time you spend on different jobs throughout the day. At the end of the week you’ll have quite a clear diary of how you actually approach the working week – you might be shocked by the amount of time you spend off-task, replying to emails, attending meetings or whatever. You may also realize you’re not actually coding slowly but are instead spending far too much time on some unnecessary task. You can then identify your weak points and work on reducing time lost in these areas.

If you take some of these steps and implement them in your day to day working practices, you should start to notice gradual improvements. We’re not promising instant miracles, but taking a pragmatic approach will help you to minimize your weaknesses and accentuate your strengths. Good luck!


Bring high volumes of complex information to life with Infragistics WPF powerful data visualization capabilities! Download free trial or contact us today.

 


Building a Real time application with SignalR – Part 2

$
0
0

 

 

This post is a continuation of my previous post, where we discussed the needs, basics, configurations and transport mechanisms of SignalR. In this post we will take things a step further by creating a sample application and analyzing it. Broadly, this post can be divided into two major parts: in the first part we will create the sample, and in the other we will see how SignalR works in various environments and look into the communication requests between client and server. To get a bit of background, check out Part 1 of this series here.

 

Working on the Sample

Server monitoring is an important task that we perform in many scenarios. One simple way to do it is to log in to the server remotely and monitor it directly - or you can use a client which connects to the server, pulls the required performance counters and displays that information accordingly.

In this example we are going to create a server monitoring application which shows server resource utilization in real time. Our sample will be a basic one, as our goal is to explain SignalR: we will read some basic counters of the server and display that info on the UI. The UI will be updated every second, implemented via a timer on the server that pushes the data to the clients every second.

We have already seen the required configuration in our previous post, so I’ve created an empty ASP.NET application and completed the following steps:

  • Installed the SignalR NuGet package
  • Added a Startup class with a Configuration method (a minimal sketch of this class follows the list)
  • Added an HTML file, Index.html, and referenced jQuery, the jQuery SignalR script and the hub proxy
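
For reference, a minimal Startup class for this configuration might look like the sketch below. This is an illustrative sketch only, assuming SignalR 2.x hosted with OWIN; the namespace name is hypothetical, so adjust it to your project:

    using Microsoft.Owin;
    using Owin;

    // Hypothetical namespace, used for illustration only.
    [assembly: OwinStartup(typeof(ServerMonitorSample.Startup))]

    namespace ServerMonitorSample
    {
        public class Startup
        {
            public void Configuration(IAppBuilder app)
            {
                // Maps the SignalR hubs to the default "/signalr" route.
                app.MapSignalR();
            }
        }
    }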

The core part of any SignalR application is the hub. There are two key things we will be doing here: first, reading the server resource counters, and second, pushing this information to the connected clients at a certain interval (in this example, 1 second). So let’s see the hub implementation:

        [HubMethodName("sendSeverDetails")]

 

        publicvoid SendSeverDetails()

        {

            string processorTime;

 

            string memoryUsage;

 

            string diskReadperSec;

 

            string diskWriteperSec;

 

            string diskTransferperSec;

 

            // Getting the server counters

 

            GetServerUsageDetails(out processorTime, out memoryUsage, out

                diskReadperSec, out diskWriteperSec, out diskTransferperSec);

 

            // Broadcasting the counters to the connected clients

            Clients.All.SendSeverDetails(processorTime, memoryUsage,

                diskReadperSec, diskWriteperSec, diskTransferperSec, DateTime.Now.ToString());

        }

This is the core hub method. Here we are getting the server’s counters and then calling the client callback function, passing all the parameters. In this example we are passing a separate parameter for each counter value, but we could also create an instance and pass a JSON string. As we need to keep updating the clients with the latest counters, we need to call this method at a certain interval (here, 1 second). This can be achieved via a timer as such:

        static Timer myTimer;

        private readonly TimeSpan _updateInterval = TimeSpan.FromMilliseconds(1000);

        public ServerDetailsHub()
        {
            myTimer = new System.Threading.Timer(UpdateUI, null, _updateInterval, _updateInterval);

            // Rest of the code removed for brevity. Download the solution for complete code
        }

        // This is called via the Timer after a certain interval, which in turn calls the core hub method
        private void UpdateUI(object state)
        {
            SendSeverDetails();
        }
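
The GetServerUsageDetails helper is omitted above for brevity (it ships with the downloadable solution). Purely as a sketch of what such a helper could look like, here is one possible implementation assuming the standard System.Diagnostics.PerformanceCounter API with commonly available counters; the exact counters and formatting in the actual sample may differ:

    using System.Diagnostics;

    // Sketch only: the counters are created once and sampled on every call.
    static readonly PerformanceCounter cpuCounter =
        new PerformanceCounter("Processor", "% Processor Time", "_Total");
    static readonly PerformanceCounter memCounter =
        new PerformanceCounter("Memory", "Available MBytes");
    static readonly PerformanceCounter diskReadCounter =
        new PerformanceCounter("PhysicalDisk", "Disk Reads/sec", "_Total");
    static readonly PerformanceCounter diskWriteCounter =
        new PerformanceCounter("PhysicalDisk", "Disk Writes/sec", "_Total");
    static readonly PerformanceCounter diskTransferCounter =
        new PerformanceCounter("PhysicalDisk", "Disk Transfers/sec", "_Total");

    private void GetServerUsageDetails(out string processorTime, out string memoryUsage,
        out string diskReadperSec, out string diskWriteperSec, out string diskTransferperSec)
    {
        // The first sample of a rate-based counter can read 0; subsequent calls
        // (made every second by the timer) return meaningful values.
        processorTime      = cpuCounter.NextValue().ToString("0.00") + " %";
        memoryUsage        = memCounter.NextValue().ToString("0") + " MB available";
        diskReadperSec     = diskReadCounter.NextValue().ToString("0.00");
        diskWriteperSec    = diskWriteCounter.NextValue().ToString("0.00");
        diskTransferperSec = diskTransferCounter.NextValue().ToString("0.00");
    }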

Now let’s see our client side, where we have defined the callback method.

$(function () {
    var num = 1;

    // Declare a proxy to reference the hub.
    var hubProxy = $.connection.serverDetailsHub;

    // Create a function that the hub can call to broadcast messages.
    hubProxy.client.sendSeverDetails = function (processorTime, memoryUsage, diskReadperSec, diskWriteperSec, diskTransferperSec, when) {
        $('#tblServerUsage tr').last().after("<tr><td>" + num++ + "</td><td>" + processorTime + "</td><td>" + memoryUsage + "</td><td>"
            + diskReadperSec + "</td><td>" + diskWriteperSec + "</td><td>" + diskTransferperSec + "</td><td>" + when + "</td></tr>");
    };

    // Start the connection.
    $.connection.hub.start();
});

Note – you can download the complete code for this example here.

In the above code, there are three things we are doing. First, creating the hub proxy; second, defining the callback method, which is called from the server and takes the same number of parameters; and third - which is another very important step - starting the hub. These steps negotiate with the server and create a persistent connection with the server. In this demo, I am just adding a new row whenever new data is received from the server.

Analyzing the application

SignalR generates a JavaScript proxy at run time, which is used to create the proxy instance and establish the connection to the server. It can be seen if we navigate to the proxy URL as follows:

In my previous post, we discussed that SignalR is capable of using multiple transport mechanisms and that, based on the scenario, it chooses the best option. So let’s see how the negotiation happens before a transport is selected.

The above traffic was captured while running the sample on IE 11. After downloading the required scripts, it downloads the hubs (which is the proxy we discussed above). Then you’ll see the red encircled area where it sends the negotiation request to the server. Based on that, in the next request the webSockets transport gets selected. Some other data is sent with the request as well, like the connection token, which is unique per client.

Let’s observe the same application in a different environment:

These details were captured in Chrome, and here we see that apart from the common requests, it sends the negotiate request, chooses serverSentEvents as the transport and starts the request using the selected transport. Let’s see one more scenario:

Via IE9, we got three requests similar to those above, except the transport selected was foreverFrame, which starts the connection.

We see that, based on the negotiation request, SignalR chooses the best available option - and except for WebSockets, it requires one more request to start the connection.

Limiting the Transport Protocols

SignalR is very flexible and allows many configuration options based on need. We can configure a specific transport, or we can provide fallbacks in a specific order. This also helps reduce the initial time needed to start the connection, as SignalR already knows which transport protocol to use. We can specify a transport while starting the hub, because the start function decides the protocol selection:

$.connection.hub.start({ transport: 'foreverFrame' });

We can also provide fallback options in a specific order:

$.connection.hub.start({ transport: ['foreverFrame', 'longPolling'] });

Note – similar to $.connection.hub.start(), the proxy also provides a function to stop the persistent connection, $.connection.hub.stop(); once it is called, we need to start the hub again to resume the communication between client and server.

Conclusion

In this post, we created a server monitor sample where the server pushes server usage counter details to all connected clients at a certain interval. We used a timer which raises a callback every second; the callback first collects the counters and then broadcasts them to the connected clients.

We also looked into the developer tools to examine the transport protocols used in various scenarios and saw that the same application uses different protocols based on the negotiation. We also saw how to narrow down the transports, or even specify a particular protocol, which reduces the initial overhead.

I hope you enjoyed this post, and thanks for reading!

 

Create modern Web apps for any scenario with your favorite frameworks. Download Ignite UI today and experience the power of Infragistics jQuery controls.

From ASP.NET WebForms to modern web technologies

$
0
0

You may have come across articles about the end of support for ASP.NET WebForms and how you should consider moving to ASP.NET MVC. This topic is about components from each of the ASP.NET framework programming models or, to be more concrete, why you would consider using the Ignite UI based grid widgets over the Aikido based grid controls.

Before we go further, I want to let you know that we’re not comparing the two Microsoft web application framework models. Below you will read about the main differences between them, but keep in mind that each can be “the best choice” for a particular solution, depending on the requirements of the application and the skills of the team members involved. You can build great apps with either, and bad apps with either.

How do Aikido grids work?

Like all WebForms controls, our Aikido Grids are server-based controls. All of the core features like Cell/Row editing, Sorting, Filtering, Virtual Scrolling, Paging and Active cell changing (Activation) require a postback to the server in order to sync state between the client and server and to retrieve (and operate on) the data which will be rendered in the grid. While we have used everything at our disposal to guarantee outstanding performance for our ASP.NET Grids, the requirement of constant postbacks and maintaining the entire control’s state between client and server can easily become a bandwidth or performance problem with very complex forms.

The goal of this topic is not to drive you away from the Aikido grids; instead we want to show you another perspective on how to implement, present and manipulate tabular data with modern web technologies.

Why choose the Ignite UI grids?

With Ignite UI you get a modern, new-generation client framework based on jQuery UI. Its whole lifecycle is on the client side, which makes it independent from the server-side technology, something that matters a great deal for large and demanding apps. Ignite UI grids are built with performance as a core feature, and their feature set makes them simple to use and maintain.

Let me highlight some of them:

  • Support binding to various types of data sources including JSON, XML, HTML tables, WebAPI/RESTful services, JSONP, Arrays and OData combined (example)
  • Features like local and remote sorting and filtering can be added without writing any code
  • Column hiding, resizing, summaries, fixing, grouping, templating, multi-column headers, sorting, unbound columns – in a word, Column Management features (example)
  • Easy to use selection features (example)
  • Multiple templating engine integrations (example)
  • Cell Merging (example)
  • Easy export to excel (example)
  • Responsive Web Design mode (RWD) (example)
  • js support (example)
  • Angular JS support (example)
  • Virtualization (fixed and continuous) (example)
  • Append Rows on Demand feature (example)
  • Displays data in a tree-like tabular structure, not only in hierarchical data with multiple levels and layouts (example)

 

While using any of the Ignite UI grids, whether standalone in an HTML page or in an MVC project, you will notice the full control over the HTML, RESTful services, routing features, an extensible and maintainable project architecture, reduced page size and support for parallel development. A minimal initialization sketch follows.
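
The sketch below shows roughly what such an initialization looks like. It assumes the Ignite UI scripts are already referenced on the page; the element id, data and column names are hypothetical:

    // <table id="productsGrid"></table> is assumed to exist in the page markup.
    var products = [
        { ProductID: 1, Name: "Adjustable Race", Price: 19.99 },
        { ProductID: 2, Name: "Bearing Ball", Price: 4.50 }
    ];

    $(function () {
        $("#productsGrid").igGrid({
            primaryKey: "ProductID",
            autoGenerateColumns: false,
            dataSource: products,            // could just as well be a JSON/OData/REST endpoint
            columns: [
                { key: "ProductID", headerText: "ID", dataType: "number" },
                { key: "Name", headerText: "Product", dataType: "string" },
                { key: "Price", headerText: "Price", dataType: "number" }
            ],
            features: [
                { name: "Sorting", type: "local" },
                { name: "Filtering", type: "local" },
                { name: "Paging", pageSize: 10 }
            ]
        });
    });

Everything here runs on the client, which is exactly the point made above: no postback is needed for sorting, filtering or paging.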

You should also keep in mind that ASP.NET Web Forms is not going to be part of ASP.NET 5 (ASP.NET Core 1.0). You will be able to continue building Web Forms apps in VS2015 by targeting the .NET 4.6 framework; however, Web Forms apps cannot take advantage of any of the new features of ASP.NET 5. None of this is certain yet, though.

Currently, one of the certain things is that the jQuery JavaScript library has become one of the most widely-used JavaScript libraries. jQuery’s browser abstraction layer along with the built-in DOM query engine make the library a solid platform for constructing UI components and controls. Built on top of the jQuery core, the jQuery UI library provides widgets and interactions for designing highly interactive and responsive web user interfaces.

References:

MVC vs. WebForms, A Clear Loser Emerging

Using IgniteUI and ASP.NET MVC

How to Keep Your Field and Office Teams Organized and Productive?

$
0
0

For many years, organizations have struggled to manage the challenges associated with geographically dispersed teams. How can your company stay focused and united when teams around the country or even across the world rarely work face to face?

There have been a whole range of solutions and technologies intended to overcome this challenge. In the past, geographically spread organizations depended on the post, fax, newsletters and telephone calls.

Things have gotten a whole lot easier since the widespread adoption of the Internet however, and this has resulted in a growth in teleworking in the US:

Fig 1. Total U.S. Teleworkers

Source: Global Workplace Analytics 2015

This is good news for teleworkers and your business. Recent research shows teleworkers are not only more productive, but they’re more likely to work even when they’re sick. However, telework usually involves employees working in their homes, but what about field workers?

Field workers are, by definition, away from their desks and, typically, dependent on an Internet connection. If you’ve ever worked in a team where field workers and office workers are in different locations, you’ll appreciate this can lead to quite a lot of misunderstandings, conflict and even distrust.

Why the head office/field work dynamic is so tricky

There are a number of challenges you need to bear in mind when working in a team with colleagues in a central office managing or working in conjunction with those in the field:

  • Resentment: field workers may feel they are doing ‘real work’, while the ‘paper pushers’ in head office order them around
  • Distrust: the two teams work in very different ways and as a consequence misunderstand how the other works
  • Misunderstandings: field workers and office workers are rarely in the same place at the same time, which means they can ‘cross wires’, misunderstanding what the other team is doing, what is expected of them, and what the vision for a piece of work is
  • Lack of communication: field workers, by definition, often don’t have instant access to email or even telephone connections. As a result, the two teams can struggle to stay up to date

The head office to field work dynamic can be really tough. However, there are a number of things you can do to alleviate these issues and make the relationship flourish.

1. Promote a team identity

Researchers at Stanford have shown how teams who feel that they are working to solve a problem together – rather than simply doing their own separate tasks – are much more motivated. So, how can this be achieved when you’re not in the same room – or even the same country?

Above all, your colleagues need to be kept in touch. When they can’t be in the same physical place as one another, providing access to enterprise social tools like Yammer or Office 365 Groups means individuals feel they’re part of a group working towards a larger goal. Giving field workers access to such software is therefore key.

2. An Intranet

Intranets might not be the sexiest tools in the world, but they’re still amazingly powerful. Providing common access to a SharePoint library for both teams via a mobile phone or tablet app means they can stay up to date with changes to documents and avoid misunderstandings.

3. Conference calls and webcams

With teams working far apart from one another, it can be hard to build a sense of trust and common purpose. While it’s tempting to simply email colleagues in the field, this will only lead to long, frustrating email threads.

If you arm fieldworkers with mobile devices that allow them to carry out video calls with colleagues, this simple face to face contact can create far more trust and facilitate understanding – and achieve far more than a long email chain.

4. A team charter

Management website Mind Tools recommends creating a team charter, so disparate teams have a common purpose. A team charter allows everyone to agree to a set of tasks, roles and responsibilities, and ensures everyone is ‘on the same page’ as to how they should behave and the goals they need to achieve.

5. Facilitate work anytime, anywhere

By its nature, field work does sometimes mean employees are without an Internet connection. This should not hinder their ability to be productive however. Ensuring they can sync files to their devices and work on these when they need is crucial. Whether it’s blueprints, a sales document or architectural plans, field workers need to be able to edit documents on the go and share these with office-bound colleagues once Internet connection is restored.

The mobile workforce

With field and office workers often struggling to ‘stay on the same page’, it’s crucial that you facilitate this using best practices and technology.

SharePlus allows field workers to connect to your company’s SharePoint and Office 365 environments, communicate with colleagues and collaborate on documents from multiple file sources on premise or in the Cloud while on the go. Even when there is no Internet connection, field workers can edit documents before syncing their changes to SharePoint later on.

To find out more about how SharePlus could be configured for your field workers and their needs, contact us today – we’d love to hear from you.

Windows File Share (Network Drives) Now Accessible on Mobile with SharePlus

$
0
0

Shared Drives across a network are an essential, secure way of storing and sharing documents; they are the preferred method of collaboration for businesses, colleges and government agencies. Unlike Cloud services, where there is usually one account per individual, there’s no limit on the number of Shared Drives you or your colleagues might have.

However, there are some limits. Businesses usually restrict the access to Network Drives; in some cases, users are discouraged from storing files outside their internal Drives. This might be convenient security-wise, but it poses a tremendous challenge to both mobility and team collaboration. Online guides with instructions on how to access your Drive from a mobile device never involve less than 7 or 8 steps, and none of those steps are easy breezy. Once again, SharePlus simplifies the process for you.

Access your Network Drive in less than three steps!

Accessing your Network Drive is an easy two-step process and doesn’t involve any tweaking. Simply go to the Documents tab in SharePlus, tap “+” and enter your details.

Take advantage of mobilizing your Shared Drives

Avoid having all of your data stuck in one single place with restricted access. Here are a few reasons why you should mobilize your Network Drives with SharePlus:

  • It allows for centralized access to all your storage. Aside from your Shared Drive, you can also access all your other storage services (including Dropbox, Google Drive, OneDrive for Business and, of course, SharePoint).
  • Importing files is easy and quick. Once your Network Drive has been configured, you can import files from your device’s Local Files and Photo Albums. You can also create new files using Audio Recordings and your Camera.

Team collaboration made easy by SharePlus

It has become common practice for businesses to require that their users save their information in an individual Shared Drive. Some of them will have a small C:\ Drive that allows temporary saving but gets wiped daily. With no access to a truly shared service, sharing data between co-workers can become a grueling task. SharePlus not only lets you configure multiple Shared Drives, but it also lets you edit in real time. When using the built-in iOS editors, you will be able to update your Network Drives with the new, edited document.

In iOS, there is also a built-in PDF annotator that helps you include useful notes or symbols for later reference.

Time to take control: one mobile place for all Drives

Delivering a truly mobile experience, SharePlus helps you work the way you want, keeping you productive and in-sync across your mobile devices.

Interested in mobilizing your Network Drives? Get your Free 30-Day SharePlus Enterprise Demo here!

If you have any questions, would like to give feedback, or schedule a demo, please do not hesitate to contact us. We look forward to hearing from you!

SharePlus Meets Dropbox – Seamless and Unified Mobile Access to Your Information

$
0
0

In addition to SharePoint, on-premises or in the cloud, SharePlus now allows you to access additional storage services, like Dropbox, OneDrive for Business, and Google Drive from your mobile device. Plus, you can access Network Drives (Windows File Share) shared by your colleagues.

Dropbox, one of the most popular cloud storage services, is now supported within SharePlus. Why is Dropbox a big deal? There are many reasons, so let’s just mention two: simplicity and a device-agnostic approach. Sounds familiar, right? SharePlus shares the same principles for the sake of a good and consistent user experience: SharePlus iOS users find themselves comfortable and familiar with the Android app and vice-versa.

Dropbox Business & SharePlus Enterprise: it’s a match!

With Dropbox Business, Enterprises can take advantage of secure file sharing, administration features, and some collaboration features. Any company taking advantage of Dropbox and using SharePoint will definitely find the integration between SharePlus and Dropbox useful. SharePlus Enterprise provides premium features required by power users, project managers, and IT administrators, successfully meeting the needs of small and large companies alike.

Your Dropbox content now in SharePlus

With SharePlus for iOS’ native previewer, you can take a quick look at any type of file from Dropbox, including Microsoft Office documents, PDF files, and images. When working with PDF files, SharePlus for iOS provides a PDF annotation suite that lets you annotate PDF documents and fill in PDF forms within the app.

When it comes to document, PDF, image or other editing, SharePlus’ smooth integration with third-party apps gives you total flexibility. In the iOS platform, any app that supports incoming and outgoing Open In will do the trick. Just pick the editing app you are most comfortable with or the one allowed by your Enterprise security policies. SharePlus for Enterprise adds many security layers, including app-specific restrictions and feature trimming.

Android users share the same flexibility: once you tap over a file, you can choose to edit it with the help of your preferred editing app on your Android device.

Productivity is not just about doing more

It is about creating more impact with less work. It has always paid to be productive, but nowadays it pays off more than ever. SharePlus allows you to achieve more with its Dropbox Business integration. You can take advantage of enterprise security and administration features while continuing to work with all your files at your convenience. With SharePlus you can not only mobilize your SharePoint investment, but also enable teams to access content from multiple file sources on premise or in the Cloud.

If you’re interested in learning more about how SharePlus can work with your Dropbox Business account, try out the Free 30-Day SharePlus Enteprise Demo here!

Why Work in Technology? - An Infographic

$
0
0

For several decades now, working "in technology" has been something of a buzzword. It's cool, trendy, and you'll make enough money to last a lifetime, if the rumors are to be believed. But what's true and what isn't? What's the reality of working in technology in 2016? Check out the infographic below to find out!


Share With The Code Below!

<a href="http://www.infragistics.com/community/cfs-filesystemfile.ashx/__key/CommunityServer.Blogs.Components.WeblogFiles/d-coding/5557.Why-Work-In-Technology.jpg"><img src="http://www.infragistics.com/community/cfs-filesystemfile.ashx/__key/CommunityServer.Blogs.Components.WeblogFiles/d-coding/5557.Why-Work-In-Technology.jpg" alt="Why Work in Technology infographic"/></a><br /><br /><br />Why Work in Tech? <a href="http://www.infragistics.com/products/jquery">Infragistics HTML5 Controls</a>

Developer News - What's IN with the Infragistics Community? (3/7-3/20)

$
0
0

This edition of Developer News is chock full of not only things to learn, but WHY you should learn them. If you're considering a new skill but aren't sure what, or why, check out one of these articles, selected by the Infragistics Community!

6. 6 Reasons Web Developers Need to Learn JavaScript ES6 Now (TNW)

5. 9 Skills Every Javascript Developer Should Possess ()

4. Top Programming Languages that will Future Proof Your Portfolio (Information Week)

3. The Six Hottest New Jobs in IT (IT World)

2. These are the 10 Most Demanded Roles for Computer Science Graduates (TechJuice)

1. 24 Best Web Design and Development Tools 2016 (DZine)



SharePlus Now Supports Google Drive

$
0
0

Despite all the hoopla and brouhaha surrounding cloud services, the truth is that storing data in the cloud generally means saving information online using backup services. And while cloud services are very useful, businesses are aware of the perils of having documents strewn across different cloud providers. Every cloud user has experienced at least one fast-paced, panic-driven search for a document through multiple systems. And with the proliferation of cloud storage services and enterprise software, business users need a mobile application with centralized access to all the cloud services they use for work. Once again, SharePlus from Infragistics responds to this need.

Now integrated with Google Drive, the new SharePlus version allows you to access your documents and work without limits from anywhere. All of your work is available on your mobile device, ready for you to share and collaborate on.

Why you should use Google Drive in SharePlus

SharePlus is a unique platform which lets you mobilize your SharePoint investment and enable teams to access content from multiple file sources on premise or in the Cloud on their devices. While it previously only offered integration with Microsoft SharePoint, the stakes have been raised by expanding its integrations to include Google Drive, Dropbox, Network Drives and OneDrive for Business.

SharePlus offers a native previewer to display your documents, and also relies on 3rd party applications to edit them. Google’s Docs, Sheets and Slides offer the perfect opportunity for team collaboration using SharePlus’ “Open In” functionality.

Changes to documents will be automatically synced when using Google Apps via SharePlus

You simply need to add Google Drive as a content source inside SharePlus, and you’ll have your SharePoint and Google Drive files in a centralized mobile space! When editing your Google Drive documents inside SharePlus, your experience is enhanced by the auto-saving and auto-syncing capabilities of the Google apps which send your updates directly into Google Drive (including updates to your Starred documents). In case of any connectivity issues, your Docs, Sheets and Slides can be synchronized when you are able to join a working network.

You will need to have the Google apps (Google Docs, Sheets, and Slides) installed on your device to make sure changes are being updated automatically. If you choose to edit your Google Drive files with the help of other apps, like the Microsoft Office suite, your changes will not be reflected in Google Drive or SharePlus.

Google Drive documents can be easily moved to other file locations in SharePlus

No more scattered documents! With the new version of SharePlus, you will be able to move Google Drive files to your SharePoint portals, Dropbox, OneDrive for Business, or ultimately any Network Drives shared with you. You choose how to organize your information fast and easy.

Interested in trying this out?

Get your Free 30-Day SharePlus Enterprise Demo here!

If you have any questions, would like to give feedback, or schedule a demo, please do not hesitate to contact us. We look forward to hearing from you!


New Solutions to Old JavaScript Problems: 1) Variable Scope

$
0
0

Introduction

I love JavaScript but I'm also well aware that, as a programming language, it's far from perfect. Two excellent books, Douglas Crockford's JavaScript: The Good Parts and David Herman's Effective JavaScript, have helped me a lot with understanding and finding workarounds for some of the weirdest behavior. But Crockford's book is now over seven years old, a very long time in the world of web development. ECMAScript 6 (aka ES6, ECMAScript2015 and several other things), the latest JavaScript standard, offers some new features that allow for simpler solutions to these old problems. I intend to illustrate a few of these features, examples of the problems they solve, and their limitations in this and subsequent articles.

Almost all the new solutions covered in this series also involve new JavaScript syntax, not just additional methods that can be polyfilled. Because of this, if you want to use them today and have your JavaScript code work across a wide range of browsers, then your only real option is to use a transpiler like Babel or Traceur to convert your ES6 to valid ES5. If you're already using a task runner like Gulp or Grunt this may not be too big a deal, but if you only want to write a short script to perform a simple task it may be easier to use old syntax and solutions. Browsers are evolving fast and you can check out which browsers support which new features here. If you're just interested in playing around with new features and experimenting to see how they work, all the code used in this series will work in Chrome Canary.

In this first article I am going to look at variable scope and the new let and const keywords.

The Problem with var

Probably one of the most common sources of bugs in JavaScript (at least it's one I trip over regularly) is the fact that variables declared using the var keyword have global scope or function scope and not block scope. To illustrate why this may be a problem, let's first look at a very basic C++ program:

#include <string>
#include <iostream>

using std::cout;
using std::endl;
using std::string;

int main(){
   string myVariable = "global";
   cout << "1) myVariable is " << myVariable << endl;
   {
      string myVariable = "local";
      cout << "2) myVariable is " << myVariable << endl;
   }
   cout << "3) myVariable is " << myVariable << endl;

   return 0;
}

Compile that program and execute it and it prints out the following:

1) myVariable is global
2) myVariable is local
3) myVariable is global

If you come to JavaScript from C++ (or a similar language that also has block scoping) then you might reasonably expect the code that follows to print out the same message.

var myVariable = "global";
console.log("1) myVariable is " + myVariable);
{
   var myVariable = "local";
   console.log("2) myVariable is " + myVariable);
}
console.log("3) myVariable is " + myVariable);

What it actually prints out is:

1) myVariable is global
2) myVariable is local
3) myVariable is local

The problem is that re-declaring myVariable inside the braces does nothing to change the scope, which remains global throughout. And this isn't just a problem related to braces that are unattached to any flow-control keywords. For example, you'll get the same result with the following minor change:

var myVariable = "global";
console.log("1) myVariable is " + myVariable);
if(true){
   var myVariable = "local";
   console.log("2) myVariable is " + myVariable);
}
console.log("3) myVariable is " + myVariable);

The lack of block scoping can also lead to bugs and confusion when implementing callback functions for events. Because the callback functions are not invoked immediately these bugs can be particularly hard to locate. Suppose you have a set of five buttons inside a form.

<form id="my-form"><button type="button">Button 1</button><button type="button">Button 2</button><button type="button">Button 3</button><button type="button">Button 4</button><button type="button">Button 5</button></form>

You might think the following code would make it so that clicking any of the buttons would bring up an annoying alert dialog box telling you which button number you pressed:

var buttons = document.querySelectorAll("#my-form button");

for(var i=0, n=buttons.length; i<n; i++){
   buttons[i].addEventListener("click", function(evt){
      alert("Hi! I'm button " + (i+1));	
   }, false);
}

In fact, clicking any of the five buttons will bring up an annoying alert dialog box telling you that the button claims to be the mythical button 6.

The issue is that the scope of i is not limited to the (for) block and each callback thinks i has the same value, the value it had when the for loop was terminated.

One solution to this problem is to use an immediately invoked function expression (IIFE) to create a closure, in which the current loop index value is stored, for each iteration of the loop:

for(var i=0, n=buttons.length; i<n; i++){
   (function(index){
      buttons[index].addEventListener("click", function(evt){
         alert("Hi! I'm button " + (index+1));	
      }, false);
   })(i);
}

let and const

ES6 offers a much more elegant solution to the for-loop problem above. Simply swap var for the new let keyword.

for(let i=0, n=buttons.length; i<n; i++){
   buttons[i].addEventListener("click", function(evt){
      alert("Hi! I'm button " + (i+1));	
   }, false);
}

Variables declared using the let keyword are block-scoped and behave much more like variables in languages like C, C++ and Java. Outside of the for loop i doesn't exist, while inside each iteration of the loop there is a fresh binding: the value of i inside each function instance reflects the value from the iteration of the loop in which it was declared, regardless of when it is actually called.

Using let works with the original problem too. The code

let myVariable = "global";
console.log("1) myVariable is " + myVariable);
{
   let myVariable = "local";
   console.log("2) myVariable is " + myVariable);
}
console.log("3) myVariable is " + myVariable);

does indeed give the output

1) myVariable is global
2) myVariable is local
3) myVariable is global

Alongside let, ES6 also introduces const. Like let, const has block scope but the declaration leads to the creation of a "read-only reference to a value". You can't change the value from 7 to 8 or from "Hello" to "Goodbye" or from a Boolean to an array. Consequently, the following throws a TypeError:

for(const i=0, n=buttons.length; i<n; i++){
   buttons[i].addEventListener("click", function(evt){
      alert("Hi! I'm button " + (i+1));	
   }, false);
}

It's important (and perhaps confusing) to note that declaring an object with the const keyword does not make it immutable. You can still change the data stored in an object or array declared with const, you just can't reassign the identifier to some other entity. If you want an object or array that is immutable you need to use Object.freeze (introduced in ES5).
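
A short example makes the distinction clear (it runs as-is in any browser console that supports ES6):

    const config = { theme: "dark" };
    config.theme = "light";       // fine: the object's contents are still mutable
    // config = {};               // TypeError: assignment to a constant variable

    const frozen = Object.freeze({ theme: "dark" });
    frozen.theme = "light";       // silently ignored (throws a TypeError in strict mode)
    console.log(frozen.theme);    // "dark"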

Solve Small Problems

$
0
0

It's fun to think of great moments in the history of science, particularly the ones that have a memorable anecdote attached to them.  In the 3rd century BC, a naked Archimedes ran down a city street, screaming Eureka, because he had discovered, in a flash, how to measure the volume of irregular solids.  In the 1600s, a fateful apple bonks Isaac Newton on the head, causing him to spit out the Theory of Gravity.  In the early 1900s, another physicist is sitting around, contemplating the universe, when out pops E=MC^2.

These stories all share two common threads: they're extremely compelling and entirely apocryphal.  As such, they make for great Disney movies, but not such great documentaries.  Point being, we as humans like stories of "eureka moments" and lightning bolt inspiration much better than tales of preparation, steady work, and getting it right on attempt number 2,944, following 2,943 failed attempts.

But it goes beyond just appreciating the former type of story.  We actually manufacture them.  Perhaps the most famous sort of example was Steve Jobs' legendarily coy, "oh yeah, there's one more thing" that preceded the unveiling of some new product or service.  Jobs and Apple were masters of "rabbit from the hat" marketing where they'd reveal some product kept heretofore under wraps as though it were a state secret.  All that is done to create the magic of the grand reveal -- the illusion that a solution to some problem just *poof* appeared out of thin air.

Unrealistic Expectations

With all of this cultural momentum behind the idea, it's easy for us to internalize it.  It's easy for us to look at these folk stories of scientific and product advancement and to assume that not having ideas or pieces of software fall from us, fully formed and intact, constitutes failure.  What's wrong with us?  Why can't we just write that endpoint in one shot, taking into account security, proper API design, backward compatibility, etc?  We're professionals, right?  How can we be failing at this?

You might think that the worst outcome here is the 'failure' and the surrounding feelings of insecurity.  But I would argue that this isn't the case at all.  Just as the popular stories of those historical scientists are not realistic, and just as Apple didn't wave a magic wand and wink an iPod Nano into existence, no programmer thinks everything through, codes it up, and gets it all right and bulletproof from the beginning.  It simply doesn't happen.

Paralysis By Analysis

As such, the worst problem here isn't the 'failure' because there is no failure.  Not really.  The worst problem here is the paralysis by analysis that you tend to face when you're caught in the throes of this mindset.  You're worried that you'll forget something, make a misstep, or head in the wrong direction.  So instead, you sit.  And think.  And go over your options endlessly, not actually doing anything.  That's the problem.

I'm sure there is no shortage of articles that you might find on the internet, suggesting 6 fixes for paralysis by analysis.  I won't treat you to a similar listicle.  Instead, I'll offer one fix.  Solve small problems.

Progress Through Scaling Down

Building a service endpoint with security, scale, backward compatibility, etc, is a bunch of problems, and none of them is especially small.  You know what is a small problem?  (Assuming you're using Visual Studio)  Creating a Visual Studio solution for your project is a small problem.  Adding a Web API project (or whatever) to that solution is a small problem.  Adding a single controller method and routing a GET request to it is a small problem.  Getting into that method in the debugger at runtime is a small problem.  Returning a JSON "Hello World" is a small problem.

If you assembled these into a list, you could imagine a conceptual check mark next to each one.

  • Make solution file.
  • Add project.
  • Add a controller.
  • Hit controller at runtime with GET request.
  • Return JSON from controller.

There's no worrying about aspects, authentication, prior versions, or anything else.  There's only a list of building blocks that are fairly easy to execute, fairly easy to verify, and definitely needed for your project.
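
To make that concrete, the last two items on the checklist might amount to nothing more than a sketch like this (assuming ASP.NET Web API 2; the names are purely illustrative):

    using System.Web.Http;

    // One controller, one GET action, JSON "Hello World": a deliberately small problem.
    public class HelloController : ApiController
    {
        // Routed by convention to GET api/hello
        public IHttpActionResult Get()
        {
            return Ok(new { message = "Hello World" });
        }
    }

Each item after this is just another small, verifiable win.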

The idea here is to get moving -- to build momentum without worry and to move ahead.  There are two key considerations with this approach, and they balance one another.  They're also ranked in order of importance.

  1. Don't let yourself get stuck -- pick a small, needed problem to solve, and solve it.
  2. Do your best to solve your problems in a non-limiting way.

The first is paramount because you're collecting a paycheck to do things, not to do nothing.  Or, to be less flip about it, progress has to come.  The second item is important, but secondary.  Do your best to make progress in ways that won't bite you later.

For an example of the interplay here, consider one of the aspects that I've been mentioning -- say security.  At no point during the series of problems on the current to-do list is security mentioned.  It's not your problem right now -- you'll address it later, when the first step toward security becomes your problem on your list.  But, that doesn't mean that you should do something obtuse in solving your problems or that you should do something that you know will make it harder to implement security later.

For me, various flavors of test-driven development (TDD) are the mechanism by which I accomplish this.  I certainly recommend giving this a shot, but it's not, by any stretch, the only way to do it.  As long as you've always got a way to keep a small achievable task in front of you and to keep from shooting yourself in the foot, your method should work.

The key is to keep moving through your checklist, crossing off items, and earning small wins.  You do this by conceiving of and solving small problems.  If you do this, you may never become Disney Archimedes or Einstein, but you won't have to pull a Steve Jobs magic act during your next performance review to secure a raise against all odds.  You'll already have it in the bag.

Want to build your desktop, mobile or web applications with high-performance controls? Download Ultimate Free trial today or contact us and see what it can do for you.

Ignite UI Release Notes - March 2016: 15.1, 15.2 Service Release

$
0
0

With every release comes a set of release notes that reflects the state of resolved bugs and new additions from the previous release. You’ll find the notes useful to help determine the resolution of existing issues from a past release and as a means of determining where to test your applications when upgrading from one version to the next.

Release notes are available in both PDF and Excel formats. The PDF summarizes the changes to this release along with a listing of each item. The Excel sheet includes each change item and makes it easy for you to sort, filter and otherwise manipulate the data to your liking.

Note: This is the last service release for Ignite UI 2015 Volume 1.

Download the Release Notes

Ignite UI 2015 Volume 1

Ignite UI 2015 Volume 2

Infragistics ASP.NET Release Notes - March 2016: 15.1, 15.2 Service Release

$
0
0

With every release comes a set of release notes that reflects the state of resolved bugs and new additions from the previous release. You’ll find the notes useful to help determine the resolution of existing issues from a past release and as a means of determining where to test your applications when upgrading from one version to the next.

Release notes are available in both PDF and Excel formats. The PDF summarizes the changes to this release along with a listing of each item. The Excel sheet includes each change item and makes it easy for you to sort, filter and otherwise manipulate the data to your liking.

Note: This is the last service release for ASP.NET 2015 Volume 1.

Download the Release Notes

ASP.NET 2015 Volume 1

ASP.NET 2015 Volume 2
