Channel: Infragistics Community

uxcamp Copenhagen - the topics


Pitching your talk and listening to other amazing people


The #uxcampcph logo. Image attributed to: http://uxcampcph.org/Uploads/UXCampCPH_HVID_transparant.png

This is a continuation of an earlier blog about my experiences at UX Camp CPH 2015 with a focus on the topics presented there.

In a blog last week I tried to explain what lean stands for in a broader sense and to relate the concept to an event that I recently attended. UX Camp Copenhagen is a forum organized in a lean fashion and had Jeff Gothelf, the father of "Lean UX", as its keynote speaker. In this blog I would like to share a bit more about the conference, the topics I attended and the one that I offered to the other attendees. I will start with Friday night to set the mood, continue with a break-the-ice session on Saturday morning, followed by the attendee-generated content, and end with Jeff's closing keynote on Lean UX.

Setting the mood

Friday night began with three invited speakers, who offered very different topics. First, Jonas Priesum from theeyetribe talked about eye-tracking, the science behind it and its related problems, such as how users might visually select items on-screen. Of course, the inevitable debate between "blink to select" and "dwell to select" spiced things up, but it all ended with a nice overview of the empowering potential of the technology for the hospitalized and the disabled.

Next it was time for Johan Knattrup to talk about the interactive movie experience that his team created using Oculus Rift, called Skammekrogen. They basically directed a 20-minute immersive movie experience that could be lived through the eyes of one of the actors through the use of a virtual reality headset. What was particularly interesting was how their initial screening of the film seemed to doom the whole concept. Movie viewers failed to feel very "immersed" in one particular character. They actually felt alienated throughout the movie when in the shoes of that particular actor. Initially the team's understanding was that they had failed to achieve immersion and all their shooting and directing efforts were in vain. But after a more in-depth analysis of their script, they realized that it was actually written such that this particular character was distant from everyone else. This, it turns out, immersed movie viewers beyond everyone's initial expectations.

The final speaker of the night was Thomas Madsen-Mygdal, ex-chairman of podio, who spoke about belief. According to Madsen-Mygdal, belief in something is a choice, and belief in the power of the Internet 20 years ago was what drove humanity forward. He also suggested that those who ultimately succeed in life are those who believe in seemingly unattainable long-term goals - particularly when the odds are against them. Perhaps the most important thing that stuck in my mind was the notion of belief as "the most important design tool in life".


Johan Knattrup to the left and Thomas Madsen-Mygdal to the right setting the mood on Friday night. Image attributed to the author.

My take on the whole of Friday night was that I was in the right place. Whether you were more of a researcher, an artist, or a philosophical type of person, this was the place and the time to share anything you were passionate about, regardless of how crazy it might seem.

Breaking the ice

Saturday morning brought us a hidden gem: Ida Aalen's talk about The Core Model. I particularly loved the way she "killed" the homepage-first design approach by showing that most of the time we land on a child page from a Google search or by following a link shared on social media. And if we think about it for a second she is absolutely correct; we rarely see the homepage even when exploring some of the IA of a given website. The framework that she extensively uses and promotes, The Core Model, is definitely one of the things that I cannot wait to put into practice in my upcoming design challenges.

Talks from the people and for the people

Luckily, everyone who pitched a talk managed to find a slot on the schedule, which highlighted the impressive efforts of the organizers: 27 of us pitched for thirty seconds each, one after another. Once the schedule was ready, I decided to spend my first slot with Nanna and the rest of Think! Digital in a discussion about designing with and for the actual content. We spoke about the importance of getting actual content as early as possible and prototyping with it, instead of the "Lorem ipsum…" that is so familiar to the design world. Having content early means we decrease the probability that a piece of content will ruin our layout later in the project; rather, the content becomes a design constraint known from the very beginning.

My second slot was spent with Pascal from ustwo in London. It was probably the most anticipated talk of the day after an amazing pitch and he definitely kept his promise. Pascal spoke about the digital identity that we create through all our gadgets, how they quantify us and the implications of this journal of our life (e.g., ownership, privacy and longevity) as these journals are very likely to outlive us.

The third session on my list was with Steven from Channel 4, another speaker from the UK. He talked about their design process, involving experience maps and user journeys, taking as a case study the launch of his company’s “On Demand” product.

Doing my part

At the end of the day it was time for the talk that I had prepared: “Designing Usable Dashboards”. I picked that topic for two reasons. Firstly, we at Infragistics know how to design usable dashboards. We have demonstrated that on a number of occasions such as the Xamarin Auto Sales Dashboard, Marketing Dashboard, Finance Stocks App, and CashFlow Dashboard to note just some of our latest work. Secondly, I was really inspired by the webinar, How To Design Effective Dashboards, recently presented by Infragistics Director of Design, Tobias Komischke. Despite the fact that my slides had a researcher’s approach to data visualization, the lengthy discussion at the end of the talk left me with the feeling that it quenched the thirst of the crowd for the topic.


Designing Usable Dashboards presentation by the author. Image attributed to the author.

The icing on the cake

There was only one thing standing between us and the beer in the bar that signifies the end of such community-driven forums: what turned out to be inarguably the best talk of the whole event, Jeff Gothelf on Lean UX. Originally from New Jersey, where Infragistics' headquarters are located, he shared his struggle to create a design team in a NYC startup, a team that had to work within the agile software development process already established in the company. Jeff shared the ups and downs along the way, and the birth of what he eventually coined "the Lean UX approach". He spoke about continuous feedback loops, conversations as a core communication medium, and the importance of learning and change. He also spoke about how crucial it is to learn whether your assumptions are valid by testing a hypothesis with minimal effort, as quickly as possible, and that once you are better informed, you have to be willing to change and iterate to move your product forward.


Jeff Gothelf talking about lean UX. Image attributed to the author.

UX Camp Copenhagen, thank you once again for the great event; it was truly a pleasure to be part of it. I hope to see you again next year.


Bar Charts versus Dot Plots


Bar charts have a distinct advantage over chart forms that require area or angle judgements. That's because the simple perceptual tasks we require for decoding a bar chart - judging lengths and/or position along a scale - are tasks we're good at. But we also decode dot plots through judging position along a scale. Is there a reason to choose one over the other?

To explore this question I'm going to create several bar charts and dot plots from a real-world dataset. Specifically we'll be looking at the World Health Organization (WHO) table of life expectancy by country. It covers three different years - 1990, 2000, and 2012 - and we'll just look at the life expectancy at birth for both sexes combined. Data is rounded to the nearest whole year.

Let's start by looking at the increase in life expectancy between 1990 and 2012 for 12 of the G-20 nations.

Which chart is better? With the bar chart you can compare lengths as well as position, but if you're an ardent disciple of Edward Tufte then the dot plot has the better data-ink ratio. In addition, one could always change the lines in the dot plot so that they only go from 0 to the position of the dot if one wanted to judge based on length. In the end, I think in this simple case it's probably just a matter of personal preference.
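Whichever chart form we pick, the preparation step is the same: derive the increase per country and sort it. A minimal sketch of that step (the values below are illustrative placeholders, not the actual WHO figures):

```javascript
// Illustrative life-expectancy values (not the actual WHO numbers)
// for a few of the twelve countries discussed.
const lifeExpectancy = [
  { country: "Turkey",  y1990: 64, y2012: 75 },
  { country: "India",   y1990: 58, y2012: 66 },
  { country: "Japan",   y1990: 79, y2012: 84 },
  { country: "Germany", y1990: 75, y2012: 81 },
];

// Both chart forms plot the same derived quantity: the increase
// between the two years, sorted so the largest gain comes first.
const increases = lifeExpectancy
  .map(d => ({ country: d.country, increase: d.y2012 - d.y1990 }))
  .sort((a, b) => b.increase - a.increase);

console.log(increases[0]); // the country with the largest increase
```

The choice between bars and dots only affects how this derived column is encoded, not how it is computed.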

What if, instead of looking at the difference between 2012 and 1990 for each country, we just wanted to show the two corresponding values? In the bar chart case we create a grouped bar chart, in the dot plot case we string two different symbols on each line.

It's easy to compare the two bars from the same country, but if we want to compare across countries for the same year we must ignore the presence of half the bars. Because these bars provide quite a dense concentration of color, this isn't all that easy a task. With the dot plot, comparison for the same country is even easier - we just scan along the same horizontal line. I think comparison between countries for the same year is also simpler: there are no large blocks of color to distract us when we want to compare blue circles to other blue circles or red squares to other red squares.

That covers the most obvious decoding tasks, but can we extract any other insights? I think it's immediately apparent from the dot plot that Turkey has seen the biggest increase in life expectancy (as was obvious when directly plotted in the first example). With the grouped bar chart, that information is there but it is somewhat concealed. Similarly, I think that the fact that the life expectancy in India in 2012 was lower than for most of the listed countries in 1990 is more obvious in the dot plot.

Let's add the middle year of measurement to the chart and see what difference that makes.

Now things look a bit cramped. In the case of the dot plot, for example, there is an overlap between the marker for the year 2000 and one of the other two years in eight of the twelve cases. But we can change things with the dot plot more than we can the bar chart. Assuming we're restricted to the same horizontal and vertical space as above, about the only thing we can do with the bar chart is change the horizontal scale so its maximum coincides with the maximum in the data. But with the dot plot, because line length does not encode anything, we can expand our scale in both horizontal directions to whatever is convenient.
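The key point above is that bars encode length, so their scale conventionally starts at zero, while dots encode only position, so their scale can hug the data. A rough sketch of the two axis-domain rules (function names are my own, for illustration):

```javascript
// Axis range for a bar chart vs. a dot plot over the same values.
// Bars encode length, so the scale starts at zero; dots encode only
// position, so the scale can be tightened around the data.
function barChartDomain(values) {
  return [0, Math.max(...values)];
}

function dotPlotDomain(values, padding = 1) {
  return [Math.min(...values) - padding, Math.max(...values) + padding];
}

const values = [58, 64, 66, 75, 79, 84]; // illustrative life expectancies
console.log(barChartDomain(values)); // [0, 84]
console.log(dotPlotDomain(values));  // [57, 85]
```

With the tighter domain, the same horizontal space is spent entirely on the range where the data actually lives.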

Things are much clearer now in the dot plot while the bar chart is barely any different.

The above discussion gives several reasons for favoring a dot plot over a bar chart. The dataset used is, however, quite well-behaved. Specifically, for each country the life expectancy increased from 1990 to 2000 to 2012. This was not universally the case across the globe. In fact if I'd picked a different sample of twelve countries from the G-20, like the one below, our dataset would not have been so well-behaved.

In the case of South Africa and Russia we have overplotting in the dot plot. That's a problem we can probably deal with. We could use semi-transparent points, for example. The bars of a grouped bar chart do not lie on the same line and so overplotting will never be an issue.

Software Design & Development Conference


I'm heading out tomorrow night to attend and speak at Software Design & Development, a yearly conference in London, UK. My talk, "Assessing UX", is on May 14th and provides a 360-degree view of the different dimensions of user experience and the concrete things to look for when assessing those dimensions. Free tools are presented that help to check concepts and products for their UX quality. I'll also involve the audience in a live usability test demonstration, with a 5-minute Q&A period right at the end of the presentation. Should be fun!

UXify Animating Name Badges


In case you missed out on UXify 2015 last month, check out the recent Infragistics blog UXify North America – Conference Videos for all 8 presentations covering “The Future of UX Design”.


In addition to an afternoon of free lectures, conference goers also received interactive animating name badges. At first glance, the name badge appears to be the attendee’s name printed on a card along with an abstract design. But with the addition of a second transparent card overlaying the image, the design comes to life.

The name badge uses a method of animation known as "scanimation". Six animation frames are combined into a single abstract image. By moving a striped acetate overlay across the image, the viewer is only able to see one frame at a time. As the frames are quickly strung together, the once-static image creates the illusion of movement.
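The interleaving trick behind scanimation can be sketched in a few lines: the combined image stores the frames column-by-column, and an overlay whose slits pass every sixth column reveals exactly one frame. This toy model uses strings in place of pixel columns:

```javascript
// A toy "scanimation": six frames are interleaved column-by-column
// into one combined strip. An overlay with one transparent slit per
// six columns reveals exactly one frame; sliding the overlay by one
// column advances the animation to the next frame.
const FRAMES = 6;
const frames = Array.from({ length: FRAMES }, (_, f) =>
  [`f${f}a`, `f${f}b`, `f${f}c`]); // each frame is 3 "columns" wide

// Interleave: combined column (i * FRAMES + f) comes from frame f.
const combined = [];
for (let i = 0; i < frames[0].length; i++) {
  for (let f = 0; f < FRAMES; f++) combined.push(frames[f][i]);
}

// The overlay at offset k shows columns where index % FRAMES === k,
// which are precisely the columns of frame k.
function visibleFrame(strip, offset) {
  return strip.filter((_, idx) => idx % FRAMES === offset % FRAMES);
}

console.log(visibleFrame(combined, 2)); // ["f2a", "f2b", "f2c"]
```

Sliding the overlay through offsets 0..5 plays all six frames in sequence, which is the illusion of movement the badge produces.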

Try the animation for yourself using this interactive prototype: http://indigodesigned.com/share/7qn4datqwwqu

Interested in sharing your own prototypes? Check out the all new platform for sharing Indigo Studio prototypes: IndigoDesigned.com

NucliOS Release Notes - May: 14.2.331, 15.1.70 Service Release


Introduction

With every release comes a set of release notes that reflects the state of resolved bugs and new additions from the previous release. You’ll find the notes useful to help determine the resolution of existing issues from a past release and as a means of determining where to test your applications when upgrading from one version to the next.

Release Notes: NucliOS 2014 Volume 2 Build 331 and NucliOS 2015 Volume 1 Build 70

Component: IGChartView
Product Impact: Bug Fix
Description: The first and last points are cropped in the OHLC and Candlestick series.
Note: Added useClusteringMode to the category axis. Setting this property to true stops the first and last data points in financial price series from being cut in half.
Service Release: 14.2.331, 15.1.70

Component: IGSparklineView
Product Impact: Bug Fix
Description: A sparkline rendered as a line closes its geometry path.
Note: Fixed the line-type sparkline rendering a filled polygon instead of a polyline.
Service Release: 14.2.331, 15.1.70

By Torrey Betts

How to use AngularJS in ASP.NET MVC and Entity Framework


These days, it seems like everyone is talking about AngularJS and ASP.NET MVC, so in this post we will combine the best of both worlds and demonstrate how to use AngularJS in an ASP.NET MVC application. Later in the post we will see how to access data using the Entity Framework database-first approach, fetch that data in an AngularJS service, and pass it to the view using a controller. In short, this post will touch upon:

- adding the AngularJS library to an ASP.NET MVC project;
- referencing AngularJS via bundling and minification;
- fetching data using the Entity Framework database-first approach;
- returning JSON data from an ASP.NET MVC controller;
- consuming JSON data in an AngularJS service;
- using the AngularJS service in an AngularJS controller to pass data to the view; and
- rendering data in an AngularJS view.

To start, let’s create an ASP.NET MVC application and right-click on the MVC project. From the context menu, click on Manage NuGet Packages, search for the AngularJS package and install it into the project.

 

After successfully adding the AngularJS library, you can find its files inside the Scripts folder.

Reference of AngularJS library

You have two options for adding an AngularJS library reference to the project: MVC minification and bundling, or adding AngularJS in the scripts section of an individual view. If you use bundling, AngularJS will be available in the whole project; however, you also have the option to use AngularJS on a particular view only.

Let’s say you want to use AngularJS on a particular view (Index.cshtml) of the Home controller. First you need to refer to the AngularJS library inside the scripts section as shown below:

@section scripts{
    <script src="~/Scripts/angular.js"></script>
}

 

Next, apply the ng-app directive and any other required directives on the HTML element as shown below:

<div ng-app="" class="row">
    <input type="text" ng-model="name" />
    {{name}}
</div>

 

When you run the application you will find AngularJS is up and running in the Index view. In this approach you will not be able to use AngularJS on the other views because the AngularJS library is only referenced in the Index view.

You may have a requirement to use AngularJS in the whole MVC application. In this case, it’s better to use MVC's bundling and minification and register the AngularJS library at the layout level. To do this, open BundleConfig.cs from the App_Start folder and add a bundle for the AngularJS library as shown below:

 

public static void RegisterBundles(BundleCollection bundles)
{
    bundles.Add(new ScriptBundle("~/bundles/angular").Include(
                "~/Scripts/angular.js"));
    // ... other existing bundle registrations ...
}

 

After adding the bundle in the BundleConfig file, next you need to add the AngularJS bundle in the _Layout.cshtml as listed below:

<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>@ViewBag.Title - My ASP.NET Application</title>
    @Styles.Render("~/Content/css")
    @Scripts.Render("~/bundles/modernizr")
    @Scripts.Render("~/bundles/angular")
    @Scripts.Render("~/bundles/jquery")
    @Scripts.Render("~/bundles/bootstrap")
    @RenderSection("scripts", required: false)
</head>

 

After creating an AngularJS bundle and referring to it in _Layout.cshtml, you should be able to use AngularJS in the entire application.

 

Getting Data from the Database into AngularJS

So far we have seen how to set up AngularJS at the level of a particular view and at the level of the entire application. Now let’s go ahead and create an end-to-end MVC application in which we will do the following tasks:

1. Fetch data from the database using the EF database-first approach
2. Return JSON from the MVC controller
3. Create an AngularJS service to fetch data using $http
4. Create an AngularJS controller
5. Create an AngularJS view on the MVC view to display data in a table

Connect to a database using the EF database first approach

To connect to a database with the EF database-first approach, right-click on the MVC project and add a new item. From the Data tab, select the ADO.NET Entity Data Model option as shown in the image below:

 

From the next screen, select the “EF Designer from database” option.

 

On the next screen, click on the New Connection option. To create a new connection to the database:

1. Provide the database server name
2. Choose the database from the drop-down. Here we are working with the “School” database, so we’ve selected that from the drop-down.

 

 

 

On the next screen, leave the default name of the connection string and click next.

 

On the next screen, select the tables and other entities you want to keep as part of the model. To keep it simple I am using only the “Person” table in the model.

 

At this point we have created the connection to the database and a model has been added to the project. You should see that an .edmx file has been added to the project.

 

Return JSON from the MVC controller

To return the Person data as JSON, let us go ahead and add an action in the controller with the return type JsonResult. Keep in mind that you can easily write a Web API to return JSON data; however the purpose of this post is to show you how to work with AngularJS, so I’ll stick with the simplest option, which is creating an action which returns JSON data:

public JsonResult GetPersons()
{
    SchoolEntities e = new SchoolEntities();
    var result = e.People.ToList();
    return Json(result, JsonRequestBehavior.AllowGet);
}

 

Create an AngularJS service to fetch data using the $http

Here I assume that you already have some knowledge about these AngularJS terms, but here’s a quick review/intro of the key concepts:

Controller

A controller is the JavaScript constructor function which contains data and business logic. The controller and the view talk to each other using the $scope object. Each time a controller is used on the view, an instance gets created. So if we use it 10 times, 10 instances of the constructor will get created. 

Service

A service is a JavaScript function whose instance is created once per application life cycle. Anything shared across controllers should be part of a service. A service can be created in five different ways; the most popular are the service method and the factory method. AngularJS also provides many built-in services: for example, the $http service can be used to call an HTTP-based service from an Angular app. A service must be injected before it is used.

Modules

Modules are the JavaScript containers which hold the other building blocks, such as services and controllers. There should be at least one module per Angular app.

Note: These are the simplest definitions of these AngularJS concepts. You can find more in depth information on the web.
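The lifetime difference between controllers and services described above can be sketched in plain JavaScript. This mimics the caching rule (a service is built once and reused, a controller constructor runs per use); it is not Angular's actual injector code, and the names are invented for illustration:

```javascript
// Framework-agnostic sketch of the lifetime rule: a service instance
// is created once and cached, while a controller constructor runs
// every time the view uses it.
function makeInjector() {
  const factories = {};
  const cache = {};
  return {
    factory(name, fn) { factories[name] = fn; },
    getService(name) {           // created once, then reused
      if (!(name in cache)) cache[name] = factories[name]();
      return cache[name];
    },
  };
}

const injector = makeInjector();
let serviceBuilds = 0;
injector.factory("StudentService", () => ({ id: ++serviceBuilds }));

// Two lookups, one instance:
const a = injector.getService("StudentService");
const b = injector.getService("StudentService");
console.log(a === b, serviceBuilds); // true 1

// A controller, by contrast, is a constructor run once per use:
let controllerInstances = 0;
function StudentController() { controllerInstances++; }
new StudentController();
new StudentController();
console.log(controllerInstances); // 2
```

This is why shared state belongs in a service: every controller instance that injects it sees the same cached object.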

Now let’s start creating the module! First, right-click on the project and add a JavaScript file. You can call it anything you’d like, but in this example, let’s call it StudentClient.js.

In the StudentClient.js we have created a module and a simple controller. Later we will modify the controller to fetch the data from the MVC action.

var StudentApp = angular.module('StudentApp', []);

StudentApp.controller('StudentController', function ($scope) {

    $scope.message = "Infragistics";

});

 

To use the module and the controller on the view, first you need to add a reference to StudentClient.js and then set the value of the ng-app directive to the module name, StudentApp. Here’s how you do that:

@section scripts{
    <script src="~/StudentClient.js"></script>
}

<div ng-app="StudentApp" class="row">
    <div ng-controller="StudentController">
        {{message}}
    </div>
</div>

 

At this point, if you run the application, you will find Infragistics rendered on the view. Let’s move on to creating the service. We will create a custom service using the factory method; in the service, we will use the built-in $http service to call the action method of the MVC controller. Here we’re putting the service in the same StudentClient.js file.

StudentApp.factory('StudentService', ['$http', function ($http) {

 

    var StudentService = {};

    StudentService.getStudents = function () {

        return $http.get('/Home/GetPersons');

    };

    return StudentService;

 

}]); 

 

Once the service is created, next you need to create the controller. In the controller we will use the custom service and assign returned data to the $scope object. Let’s see how to create the controller in the code below:

StudentApp.controller('StudentController', function ($scope, StudentService) {

 

    getStudents();

    function getStudents() {

        StudentService.getStudents()

            .success(function (studs) {

                $scope.students = studs;

                console.log($scope.students);

            })

            .error(function (error) {

                $scope.status = 'Unable to load customer data: ' + error.message;

                console.log($scope.status);

            });

    }

});

 

Here we’ve created the controller, service, and module. Putting everything together, the StudentClient.js file should look like this:

var StudentApp = angular.module('StudentApp', []);

StudentApp.controller('StudentController', function ($scope, StudentService) {

 

    getStudents();

    function getStudents() {

        StudentService.getStudents()

            .success(function (studs) {

                $scope.students = studs;

                console.log($scope.students);

            })

            .error(function (error) {

                $scope.status = 'Unable to load customer data: ' + error.message;

                console.log($scope.status);

            });

    }

});

 

StudentApp.factory('StudentService', ['$http', function ($http) {

 

    var StudentService = {};

    StudentService.getStudents = function () {

        return $http.get('/Home/GetPersons');

    };

    return StudentService;

 

}]);

 

On the view, we can use the controller as shown below, but keep in mind that we are creating an AngularJS view on the Index.cshtml. The view can be created as shown below:

 

@section scripts{
    <script src="~/StudentClient.js"></script>
}

<div ng-app="StudentApp" class="container">
    <br />
    <br />
    <input type="text" placeholder="Search Student" ng-model="searchStudent" />
    <br />
    <div ng-controller="StudentController">
        <table class="table">
            <tr ng-repeat="r in students | filter : searchStudent">
                <td>{{r.PersonID}}</td>
                <td>{{r.FirstName}}</td>
                <td>{{r.LastName}}</td>
            </tr>
        </table>
    </div>
</div>

 

On the view, we are using the ng-app, ng-controller, ng-repeat, and ng-model directives, along with “filter” to filter the table based on the input entered in the textbox. Essentially, these are the steps required to work with AngularJS in an ASP.NET MVC application.
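Conceptually, the `filter : searchStudent` expression keeps the rows where any field contains the search text, matched case-insensitively. A rough plain-JavaScript sketch of that behavior (this mimics what AngularJS's filter does for simple objects; it is not the framework's actual implementation):

```javascript
// Rough sketch of "r in students | filter : searchStudent": keep rows
// where any field contains the search text, case-insensitively.
function filterStudents(students, search) {
  const needle = String(search).toLowerCase();
  return students.filter(s =>
    Object.values(s).some(v => String(v).toLowerCase().includes(needle)));
}

const students = [
  { PersonID: 1, FirstName: "Ada",   LastName: "Lovelace" },
  { PersonID: 2, FirstName: "Alan",  LastName: "Turing" },
  { PersonID: 3, FirstName: "Grace", LastName: "Hopper" },
];

console.log(filterStudents(students, "al")); // matches only Alan Turing
```

As the user types into the textbox, ng-model updates `searchStudent` and the table re-renders with only the matching rows.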

 

Conclusion

In this post we focused on a few simple but important steps to work with AngularJS and ASP.NET MVC together. We also touched upon the basic definitions of some key AngularJS components, the EF database-first approach, and MVC. In further posts we will go into more depth on these topics, but I hope this post will help you in getting started with AngularJS in ASP.NET MVC. Thanks for reading!

Top Enterprise Mobility Events in 2015


Paul Carter, CEO of Global Wireless Solutions, hit the nail on the head when he said “we are officially living in a mobile-first world”. The extent to which mobile plays a role in our lives is difficult to measure. Maybe the simplest way to gain an understanding of its magnitude is by asking yourself, ‘when was the last time I went a day without using my mobile?’

If you’re honest, it’s probably only a few hours at most. And while browsing social networks and checking emails may be the most popular and humblest form of interaction, the role that mobile plays in our lives has much more significance and substance. Did you see the latest news? Check CNN. Want to know how your stock is performing? Browse the markets. Forgot to get that important birthday present? ‘Click and collect’ will save the day.

Never has information been so readily available and easy to access. This not only relates to our personal lives but has a distinct impact on the enterprise and how we all work. We can prepare a presentation, edit a document, chat to colleagues remotely, save that new idea, set up a meeting and much more. At Infragistics we know the importance that mobile as a whole - be it apps, UX, platforms, wearables, design, testing etc. - has in today's world. As a result we like to keep up to speed with everything going on in our community.

And with events often leading the way with new ideas, latest news and innovative thinking, we wanted to share with you 4 enterprise mobility events that we think you should look out for in 2015.

The Mobile Show Middle East 2015

12 - 13 May, Dubai

One of the biggest events on our list, The Mobile Show Middle East is “Where leaders and pioneers of mobile technology meet to explore radical new ideas”. There’s a number of topics on the agenda, from ‘Apps and Content’, ‘Platforms and Devices’ to pure ‘Enterprise Mobility’ and ‘Infrastructure and Security’. Focusing ‘on everything the mobile industry needs to know’, this 2-day conference is aimed at developers, device manufacturers, Heads of, regulators, digital marketers, mobile consultants and more.

The stats are pretty impressive too. With over 10,000 attendees, 250 exhibitors, 100 VIPs from telcos, enterprise and government lined up and an estimated 300 facilitated buyer sessions, it’s sure to be a great event which aims to help those attending discover the latest in mobile solutions which can benefit their businesses. For more information check out their site.

Apps World, North America

12 - 13 May, San Francisco

The Apps World conference in California covers one of the largest growing industries - and one we know a lot about - mobile apps. Mobile usage has overtaken desktop usage and these numbers continue to rise. Known as a must attend conference for app developers, the event provides an opportunity to meet over 10,000 ‘leading developers, brands and industry professionals from across the entire app ecosystem’.

This event is huge and has a mighty impressive speaker line up. In fact you’d be hard pushed to find one better. From the Co-founder of Twitter, Chief Evangelist of Microsoft, Lead Android UI designer at Google, CEO of OneNote, Chief Digital Officer of the NFL and Senior Director of Nike (to name a few), attendees will hear from some of the very best that the industry has to offer.

The Enterprise Mobility Forum

14 - 15 May, South Africa

Taking place at the luxurious Arabella Hotel and Spa just outside of Cape Town, the Enterprise Mobility Forum is aimed at senior executives and decision makers and is strictly invite only. Attendees are treated to five themes over two days - ‘Managing and Securing the Mobile Enterprise’, ‘Aligning Strategies to Business Objectives’, ‘Mobile Applications, Platforms and Services’, ‘Sub-Saharan Africa: Connected and Mobile’ and ‘Enterprise Mobility: Looking to the Future’.

With Microsoft as the platinum sponsor you can expect to see and hear from a range of top level management from Barclays, Investec, Microsoft, the Johannesburg Stock Exchange, SAP, HP and more. Since its inaugural conference there’s been a consistent rise in forum attendees and leading vendors, highlighting that Africa’s premium enterprise mobility event is one to watch. And with the world's second largest continent playing a prominent role in this year’s 20 fastest growing economies, it’s set to keep growing.

Enterprise Mobility Management

18 June, London

EMM 2015 is the “UK’s leading enterprise mobility management event for business and technology professionals”. Now in its fourth year, the event will cover a whole host of hot topics from collaborative working and Mobile Applications Management (MAM) to wearable tech in the workplace and mobile big data. A particular focus this year will be on the increase of BYOD. As research has highlighted, ‘the BYOD market size is set to grow to over $284 billion by 2019’. It’s also estimated that by 2017 half of all employers will require employees to supply their own device for work purposes.

Featuring client use cases and case studies the emphasis is very much on real-life scenarios, sharing best practices and providing practical business advice. So if you’re a CEO, CIO, Director of, Enterprise Architect, BYOD Manager or Risk Analyst or Specialist, then this event in London is one for you.

Looking for a comprehensive and secure mobile Office 365 and SharePoint solution, which you can customize to your preferences? Look no further. Download our SharePlus Enterprise for iOS free demo now and see the wonders it can do for your team's productivity!

SharePlus for SharePoint - Download for Android

Top 10 features of VS in 2015


It’s no secret among developers that there is no better development environment than Microsoft Visual Studio. It offers the most complete set of tools to create powerful Windows, web, and other applications, in almost any common language. Visual Studio is available in a version that fits every developer’s needs. Recently Microsoft announced the new Visual Studio 2015 product line, including the new Visual Studio Enterprise with MSDN, Visual Studio Professional with MSDN and the free Visual Studio Community edition.

The Visual Studio Community edition is a free version that has the same capabilities as the professional edition. Any developer can download this version and use it in an academic environment or in a team with no more than 5 developers.

In this post we will take a look at some of the top features in the newest edition of Visual Studio.

1. UI Debugging Tools for XAML

Visual Studio is often used to develop WPF applications and these applications are built with XAML. Two new tools have been added in the new version to inspect the visual tree of running WPF applications, as well as the properties on the elements in the tree. These tools are Live Visual Tree and Live Property Explorer. By using these tools you will be able to select any element and see the final, computed and rendered properties. In a future update these tools will also support Windows Store apps.

2. Single Sign-in

As developers today use more and more cloud services, like Azure for data storage, Visual Studio Online as a code repository, or the app store to publish applications, they previously had to sign in to each cloud service separately. The latest release reduces the authentication prompts, and many cloud services will now support single sign-on, which is a much welcome feature!

3. CodeLens

CodeLens, a tool that already existed in previous versions, is used to find out more about your code while you keep working in the editor. The CTP 6 release enables CodeLens to visualize the code history of your C++, SQL, or JavaScript files versioned in Git repositories by using the file-level indicators. When using work items in TFS, the file-level indicators will also show the associated work items.

4. Code Map

Code Map is a tool that will visualize the specific dependencies in the application code. The tool enables you to navigate the relationships by using the map. This map helps the developer to keep track of their current position in the code while working. In addition to some performance improvements, there are some other new features in the Code Map tool such as filtering, external dependency links, improved top-down diagrams and link filtering.

5. Diagnostics Tools

In the new release the Diagnostic Tools debugger now supports 64-bit Windows Store apps and the timeline now zooms as necessary so the most recent break event is always visible.

6. JavaScript editor

JavaScript is the language of the future, so the CTP 6 release also includes a few improvements:

  • Task list support. You can add a //TODO comment in your JavaScript code, which will result in a new task being created in the task list.
  • Object literal IntelliSense. The JavaScript editor now offers IntelliSense suggestions when passing an object literal to functions documented using JSDoc.


7. Unit tests

In the Visual Studio 2015 Preview, Smart Unit Tests were introduced: they generate test data and a suite of unit tests by exploring your code. In CTP 6 you can now take advantage of parameterized unit tests and test stub creation via the context menu.

8. Visual Studio Emulator for Android

As Visual Studio is no longer a tool used only for developing Windows applications, the CTP 6 version adds an improved emulator for Android with OpenGL ES, Android 5.0, Camera interaction and multi-touch support.

9. Visual Studio Tools for Apache Cordova

The latest release of Visual Studio not only offers support for debugging Android, iOS and Windows Store applications, but now adds debugging support for Apache Cordova apps that target Windows Phone 8.1.

10. ASP.NET

The CTP 6 release adds some new features and performance improvements for ASP.NET developers, such as:

  • Run and debug settings that can be customized by editing the debugSetting.json file
  • The ability to add a reference to a system assembly
  • Improved IntelliSense while editing project.json
  • A new Web API template
  • The ability to use PowerShell to publish ASP.NET 5 applications
  • Lambda expressions in the debugger watch windows


Continuous improvements

Here at Infragistics we’re constantly impressed by Visual Studio because of the continuous improvements and new features it offers to every developer. If you want to get stuck in and take a look at the new features and updates, you can start immediately by downloading the CTP 6 release here. Have fun!

If you are looking for the fastest grid on the market, this is your place. Download our Developer toolkit and test it now!


MVVM: Data Binding Rich Text to the Infragistics XamRichTextEditor


The Infragistics xamRichTextEditor control is a highly customizable rich text editing control that provides functionality modeled after the features and behavior of Microsoft Word.  You can easily create and edit Microsoft Word documents using the xamRichTextEditor.  Here’s the thing though… not every app that uses rich text uses Word, or even deals with a document at all.  Sometimes you just have a string stored in a database somewhere that holds all the rich text information as RTF or even HTML.  So, if you’re using MVVM, and populating a property with this string of rich text data, how do you data bind it to the xamRichTextEditor control?  Easy!  Use a document adapter.

Binding to Visual Elements

Let’s say that I have a xamRichTextEditor control and I want to data bind the rich text being generated by the control to another element in my view.  Let’s say a TextBlock.

<Grid>
    <Grid.ColumnDefinitions>
        <ColumnDefinition/>
        <ColumnDefinition/>
    </Grid.ColumnDefinitions>
    
    <ig:XamRichTextEditor x:Name="_rte" Grid.Column="0" />
    
    <TextBlock Grid.Column="1" />
</Grid>

Your first thought might be to data bind directly to a property of the xamRichTextEditor.  Well, you would be wrong.  We actually need to use a “middle man” called a document adapter.  Since the xamRichTextEditor supports PlainText, HTML, and RTF formats, you’ll want to choose which format you need.  Heck, you may want to support all of them.  That’s fine, no problem.  Either way, you need to add a reference to the document format you will be using.  I will be using HTML in this post.

xamRichTextEditor document formats

Once you have added the formats you need, we can create a document adapter in XAML.  In this case, I’ll be using the HtmlDocumentAdapter.

      
      <ig:HtmlDocumentAdapter x:Name="_html" Document="{Binding ElementName=_rte, Path=Document}" />

As you can see, when I defined the HtmlDocumentAdapter, I data bound the Document property to the xamRichTextEditor.Document property.  This is how you make the connection between the two controls.  Now the next step is to data bind the Text property of the TextBlock in our sample to the HtmlDocumentAdapter so that we can visualize the HTML being generated as we create rich text in the xamRichTextEditor.

      
      <TextBlock Grid.Column="1" Text="{Binding ElementName=_html, Path=Value}" />

That’s it!  We are now data bound.  Run the application and let’s see what we get.

image

Perfect!  The HTML generated by the xamRichTextEditor is data bound and being rendered by the TextBlock control.  If you start typing into the xamRichTextEditor, you will notice that the HTML isn’t updated as you type.  This is because, by default, the source doesn’t update until the control has lost focus.  Now, you may think, “oh, I’ll just use the UpdateSourceTrigger on the binding to have it update on any key stroke”.  Well, once again, you would be wrong!  You actually have to use a property that exists on the document adapter called RefreshTrigger.

image

You will notice four options: Delayed, ContentChanged, Explicit, and LostFocus.  Half of those are self-explanatory.  ContentChanged is like property changed: it will update the Value every time the content in the xamRichTextEditor is updated.  For very large documents, this could cause some performance issues.  When using Delayed, you have two additional properties to help control the behavior of the update: DelayAfterFirstEdit and DelayAfterLastEdit.

DelayAfterFirstEdit is a timespan that allows you to define how long to wait after you first start typing in the xamRichTextEditor to update the binding.

DelayAfterLastEdit is a time span that allows you to define how long to wait after you stop typing in the xamRichTextEditor to update the binding.

You can even use them together if you like.


<ig:HtmlDocumentAdapter x:Name="_html" Document="{Binding ElementName=_rte, Path=Document}"
                        RefreshTrigger="Delayed"
                        DelayAfterFirstEdit="00:00:02:00"
                        DelayAfterLastEdit="00:00:02:00" />

Here is our final XAML to create the data binding between the xamRichTextEditor, and the TextBlock.

<Grid>
    <Grid.ColumnDefinitions>
        <ColumnDefinition/>
        <ColumnDefinition/>
    </Grid.ColumnDefinitions>
    
    <ig:HtmlDocumentAdapter x:Name="_html" Document="{Binding ElementName=_rte, Path=Document}"
                            RefreshTrigger="Delayed"
                            DelayAfterFirstEdit="00:00:02:00"
                            DelayAfterLastEdit="00:00:02:00" />
    
    <ig:XamRichTextEditor x:Name="_rte" Grid.Column="0" />
    
    <TextBlock Grid.Column="1" Text="{Binding ElementName=_html, Path=Value}" />
</Grid>

 

Binding to a Property in a ViewModel

So what if we want to data bind to a property in our ViewModel?  We are using MVVM, after all!  Well, that would require just a slight modification.  Let’s say I have a ViewModel that looks like this:

public class MainWindowViewModel : INotifyPropertyChanged
{
    private string _htmlText;
    public string HtmlText
    {
        get { return _htmlText; }
        set
        {
            _htmlText = value;
            OnPropertyChanged();
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
    protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = null)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
    }
}

Well, with a small modification to our XAML, we can create a binding between the xamRichTextEditor and the property on the underlying ViewModel.

<ig:HtmlDocumentAdapter x:Name="_html" Document="{Binding ElementName=_rte, Path=Document}"
                        RefreshTrigger="Delayed"
                        DelayAfterFirstEdit="00:00:02:00"
                        DelayAfterLastEdit="00:00:02:00"
                        Value="{Binding HtmlText}"/>

Assuming your View’s DataContext is properly set (like mine is), this would now update your property depending on your RefreshTrigger settings.  You can now serialize this property any way you like.

Be sure to check out the source code, and start playing with it.  As always, feel free to contact me on my blog, connect with me on Twitter (@brianlagunas), or leave a comment below for any questions or comments you may have.

Improving Your Craft with Static Analysis


These days, I make part of my living doing what's called "software craftsmanship coaching."  Loosely described, this means that I spend time with teams, helping them develop and sustain ways to write cleaner code.  It involves introduction to things like the SOLID Principles, design patterns, DRY code, pair programming, and, of course, automated testing and test driven development (TDD).  I've spent a lot of time contemplating these subjects and their economic value to organizations, even up to the point of creating a course for Pluralsight.com about this very thing.  And through this contemplation, I've come to realize that TDD is an extraordinarily nuanced practice, both in terms of advantages offered and challenges presented.

This post is not about TDD, so what I'd like to do is zoom in on one particular benefit offered by the practice.  It's a benefit that tends to be overlooked beside the regression suite that it generates and the loosely coupled design that it encourages.  But one of the important things that TDD does is to provide a very tight, automated feedback loop.  Consider what generally happens if you're working on a web application and you want to evaluate the effects of your most recent changes to the code base.  You build the code and then run it, and running it is generally accomplished by deploying it to some local version of a web server and then starting the web server.  Once the web server and your web application are running, you then engage the GUI and navigate to wherever it is that will trigger your code to be run.  Only at this point do you get feedback about what you've done.  TDD short-circuits this process by requiring only build and execution of a test suite.

Of course, TDD isn't the only way to create a tight feedback loop, but it is a well-recognized one.  And it's also one that tends to spoil you.  After becoming used to TDD, it's hard to go back to waiting for long cycle times between writing code and seeing the results.  In fact, it tends to go the other way and you find yourself chasing other means of obtaining fast, automated feedback.  It was this exact dynamic that got me hooked on the idea of static code analysis.  If I could get quick feedback from unit tests about whether my code worked, why couldn't I get feedback about whether it was well written?

A Code Quality Feedback Loop?

Now, "well written" inherently invites a great deal of subjectivity, and it's not as though there is any universal agreement, even in a given language, as to what properties of code are ideal.  But there are some pretty well established trends that get pretty wide agreement.  It is preferable not to write classes and methods that are overly large or complex.  It is preferable not to create modules that are too tightly coupled or needlessly interdependent.  And, speaking of dependencies, it's better not to create cycles.  It's pretty easy to argue that inheritance hierarchies shouldn't be too deep, method parameter rosters shouldn't be too long, and classes shouldn't be too overrun with methods.

But factoring all of these things and more into the mix, it gets sort of hard to keep track of it all.  I mean, it's easy enough to be in the middle of some monster 4000 line method and think, "man, this method is waaay too big," but it can be harder to notice when you're adding a few lines to a method that may already be marginally too long.  After all, it's not necessarily at the forefront of your mind, since you're probably in there chasing some infuriating bug.

Before giving up hope, though, consider things with which you may be more familiar, such as test coverage tools and compiler warnings.  You can deliver code with minimal test coverage or even with boatloads of compiler warnings, but there's a nagging pull not to do so.  Call it gamification or perfectionism or whatever you like, but it's there, even if you don't always obey it.  There's a pressure to fix these issues because they're constantly there, in your face.  They're part of a pretty tight feedback loop for you.

So I encourage you to add static analysis tools into your feedback loop.  I'm not really talking about the kinds of tools that alert you if you're not following the team's coding standards (go nuts with this if you want).  Rather, I'm referring to the kinds of tools that show you things about your code like line count in methods, cyclomatic complexity, number of methods in a class, and class cohesion.  Set up tools that warn you when these things are running afoul of what they generally look like in "clean code."

What you're going to get out of this is not the bullet-proof, "one true way" to do things.  Life isn't that simple, and people who tell you it is are selling you a false bill of goods.  What you're going to get out of it is a growing understanding of architectural tradeoffs buried within the code that you write.  The static analysis tool serves the same purpose as the rumble strips on highways by jolting you whenever you're venturing beyond what may be considered standard usage.  Sure, there might be reasons to veer onto the shoulder in certain odd circumstances, but usually you've just drifted over there due to inattentiveness.  Well, not anymore you won't.

If you're skeptical, just install such a tool and see what you think.  See what it says about your code, but don't take any action one way or another if you're not comfortable with it.  If you disagree with it, do some research and try to formulate an argument as to why.  I'm not advocating that you revisit all of your programming decisions to achieve a number that some tool says you should have.  I'm advocating that you make yourself aware of these numbers and the concepts that drive them so that you can have intelligent conversations about them and make informed decisions.  And I'm advocating that you do this with a fast feedback loop, safely in the comfort of your own IDE.

The quick feedback here is the best part of all.  The static analysis tools are just executed algorithms.  You're not submitting to peers for a code review or putting your code on the internet and being blasted by mean-spirited trolls.  You're just helping yourself to some automated feedback with the understanding that you can keep helping yourself to it whenever you want.  After enough time with this approach, you'll be prepared for the arguments that actual trolls and critics might offer up.  And, hey, you might just learn some things and change some habits in ways that make you happy.

Developer News - What's IN with the Infragistics Community? (5/11-5/17)

Objects in JavaScript for .NET developers – Part 1


 

Here are some fun facts for you: JavaScript is not a class-based object-oriented language, but almost everything in JavaScript is an object. JavaScript does not have classes, and we can create an object from another object. A function can be used as a constructor and return a newly created object. Every object in JavaScript has a second object associated with it, called its prototype object.

If you’re coming from a .NET background, the sentences you just read probably don’t make much sense. But these are all true statements about JavaScript. In this post we will focus on three different ways to create objects in JavaScript:

1. Object as literal
2. Creating an object using the new operator and constructors
3. Creating an object using the Object.create() static method

 

Object creation as literal

The simplest way to create an object is with an object literal.  We can create a simple object as shown in the listing below:

 

var foo = {};
foo.prop = "noo";
console.log(foo.prop);

var rectangle = { height: 20, width: 30 };
console.log(rectangle.height);

rectangle.height = 30;
console.log(rectangle.height);

 

In the above listing, we have created two objects, and a few points are worth noting:

1. Object foo does not contain any properties.
2. Object rectangle contains two properties: height and width.
3. Properties can be added to an object after creation. When object foo was created it did not have any properties, so we added a property named “prop” to it.
4. The value of a property can be modified after object creation. In the above listing we modified the height property.

We can create a more complex object as an object literal as well. Let us say we want to create a student object with the following properties:

1. Name
2. Age
3. Subject
4. Parents, which is another object literal with its own properties, like name and age.

The complex student object can be created as shown:

var student = {
    name: "David",
    age: 20,
    parents: {
        name: 'Mark',
        age: 58
    }
};

var studentparentage = student.parents.age;
console.log(studentparentage);

 

As you can notice in the above listing, the parents property is itself an object with its own properties. We can add, remove, and access the properties of such a nested object in the same way we would with any other object.
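As a quick illustration (the city property below is a hypothetical addition, not from the original listing), a nested object can be modified in place just like a top-level one:

```javascript
// Hypothetical example: mutating a nested object literal in place.
var student = {
    name: "David",
    parents: { name: "Mark", age: 58 }
};

student.parents.city = "Boston";   // add a property to the nested object
delete student.parents.age;        // remove a property from it

console.log(student.parents.city); // "Boston"
console.log(student.parents.age);  // undefined
```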

A single object literal creates a new object each time it is evaluated in the code. To understand this better, let’s take a look at this code snippet:

var fooarr = [];

for (var i = 0; i < 10; i++) {
    var foo = { val: i };
    fooarr.push(foo);
    console.log(fooarr[i].val);
}

console.log(fooarr[3].val);

 

You will notice here that we are creating an object literal inside a loop and pushing each created object to an array. In the above listing, 10 objects were created: the object literal “foo” is evaluated 10 times inside the loop, and each evaluation creates a new object. To verify this, we access the 4th created object outside the loop.

We need to keep this in mind while working with object literals: a single object literal creates as many new objects as the number of times it is evaluated.
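To see this concretely, here is a small sketch (makePoint is a hypothetical helper, not from the listing above) showing that each evaluation of a literal produces a distinct object:

```javascript
// Each evaluation of the literal { x: 0, y: 0 } creates a brand-new object.
function makePoint() {
    return { x: 0, y: 0 };
}

var p1 = makePoint();
var p2 = makePoint();

console.log(p1 === p2); // false: two separate objects
p1.x = 99;
console.log(p2.x);      // 0: changing one does not affect the other
```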

 

Creating an object using the new operator or constructor pattern

In JavaScript we can also create an object using the new operator; this is known as the constructor pattern. When we create an object using the new operator, the new keyword must be followed by a function invocation. In this case the function works as a constructor. In the object-oriented world, a constructor is a function used to construct an object. So the function invoked after the new keyword serves as the constructor: it constructs the object and returns it.

Keep in mind that JavaScript does not have classes (up to ECMAScript 5.0), but it supports special functions called constructors. Just by calling a function after the new operator, we ask the function to work as a constructor and return the newly created object. Inside the constructor, the current object is referred to by the this keyword.

To understand it better, let us consider the following listing,

function Rectangle(height, width) {
    this.height = height;
    this.width = width;

    this.area = function () {
        return this.height * this.width;
    };
}

var rec1 = new Rectangle(45, 6);
var rec2 = new Rectangle(8, 7);

var rec1area = rec1.area();
console.log(rec1area);

var rec2area = rec2.area();
console.log(rec2area);

 

In the above listing:

1. We created a Rectangle function.
2. We created objects using the new keyword.
3. The Rectangle function was invoked after the new keyword, hence it worked as a constructor.
4. The Rectangle constructor returned the created object.
5. Inside the constructor, the object is referred to with the this keyword.

If we call the Rectangle function without the new operator, it works as a normal JavaScript function, whereas if we call it after the new operator, it works as a constructor and returns the created object.

Everything is good about the above code, with one problem: the area function is redefined for every object. We certainly do not want this; the area function should be shared among the objects.

 

Object Prototypes

All objects in JavaScript, including functions, have a prototype object. When we use a function as a constructor to create objects, the properties of its prototype object become available to the newly created objects.  We can solve the above problem of the area function being redefined by using the prototype object of the constructor.

 

function Rectangle(height, width) {
    this.height = height;
    this.width = width;
}

Rectangle.prototype.area = function () {
    return this.height * this.width;
};

var rec1 = new Rectangle(45, 6);
var rec2 = new Rectangle(8, 7);

var rec1area = rec1.area();
console.log(rec1area);

var rec2area = rec2.area();
console.log(rec2area);

 

In the above listing we are creating the area function as a property of the Rectangle prototype. Hence it is available to all new objects without being redefined.

Keep in mind that every JavaScript object has a second object associated with it, called its prototype object, and the object inherits the properties of its prototype object.
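One way to verify that sharing actually happens is to compare the method references on two instances (this uses the prototype-based Rectangle from the listing above):

```javascript
function Rectangle(height, width) {
    this.height = height;
    this.width = width;
}
Rectangle.prototype.area = function () {
    return this.height * this.width;
};

var rec1 = new Rectangle(45, 6);
var rec2 = new Rectangle(8, 7);

console.log(rec1.area === rec2.area);     // true: one shared function object
console.log(rec1.hasOwnProperty('area')); // false: area lives on the prototype
```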

 

Object creation using Object.create()

The Object.create() static method was introduced in ECMAScript 5.0. It is used to construct a new object. Using Object.create(), a new object can be created as shown in the listing below:

 

var foo = Object.create(Object.prototype,
    { name: { value: 'koo' } });
console.log(foo.name);

 

Some important points about Object.create() to remember:

1. This method takes two arguments:
   a. The first argument is the prototype of the object to be created, and is required.
   b. The second argument is optional, and describes new properties of the newly created object.
2. The first argument can be null, but in that case the new object will not inherit any properties.
3. To create an empty object, pass Object.prototype as the first argument.
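A quick sketch contrasting a null prototype with Object.prototype makes the inheritance difference visible:

```javascript
var empty = Object.create(Object.prototype); // inherits the usual methods
var bare = Object.create(null);              // inherits nothing at all

console.log(typeof empty.toString); // "function": inherited from Object.prototype
console.log(typeof bare.toString);  // "undefined": no prototype, no inheritance
```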

Let’s say you have an existing object called foo and you want to use foo as the prototype for a new object called koo with an added property. You can do so like this:

 

var foo = {
    name: 'steve',
    age: 30
};

var koo = Object.create(foo,
    { subject: { value: 'koo' } });

console.log(koo.name);
console.log(koo.subject);

 

In the above listing, we have an object named foo, and we’re using foo as the prototype of the object named koo. koo will inherit the properties of foo and will also have its own additional properties.
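hasOwnProperty makes the inherited/own distinction visible (the values here mirror the listing above and are purely illustrative):

```javascript
var foo = { name: 'steve', age: 30 };
var koo = Object.create(foo, { subject: { value: 'koo' } });

console.log(koo.name);                      // 'steve': inherited from foo
console.log(koo.hasOwnProperty('name'));    // false: name lives on the prototype
console.log(koo.hasOwnProperty('subject')); // true: subject is koo's own property
```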

 

Conclusion

There are a few different ways to create objects in JavaScript, and in this post we focused on three of them. Stay tuned for the second part of this post, where we will focus on:

  • Inheritance
  • Object properties
  • Property getters and setters
  • Enumerating properties, etc.

I hope you find my posts useful - thanks for reading, and happy coding!

 

Simplifying the JavaScript Callback function for .NET developers


In JavaScript, functions are objects, and they can:

  • Be passed as an argument to another function
  • Be returned as a value from a function
  • Be assigned to a variable

Let’s assume that you have a JavaScript function (let’s call it function A) with the following properties:

1. Function A takes another function (let’s call this one function CB) as one of its parameters.
2. Function A executes the function CB in its body.

In the above scenario, function CB is known as a callback function. Let’s learn more about it using the following code:

 

function A(param1, param2, CB) {
    var result = param1 + param2;
    console.log(result);
    CB(result);
}

 

Here we’ve created a function A, which takes three parameters. You will notice that the last parameter, CB, is a function, which is being called inside the body of function A. Next we’ll call function A as shown in the listing below:

 

function CallBackFunction(result) {
    console.log(result + ' in the CallBack function');
}

A(5, 7, CallBackFunction);

 

Here we’ve created a function named CallBackFunction (you can name it whatever you’d like) and we’ve passed it as the third parameter to function A. In its body, function A executes the passed CallBackFunction.

Another way to pass a callback function is as an anonymous function. See the example here:

 

A(5, 7, function (result) {
    console.log(result + ' in the CallBack function');
});

 

How does the Callback function work?

In the called function, we pass the definition of the callback function. Let’s consider the example we took above:

1. In function A, we pass the definition of the callback function.
2. Function A has information about the callback function’s definition.
3. Function A calls the callback function in its body.
4. While calling function A, we pass the callback function.
5. The callback function can be either a named or an anonymous function.

 

Optional callback function

What would happen if we didn’t pass a third parameter (i.e. a callback function) to function A? In that case, an exception would be thrown stating that “undefined” is not a function. A JavaScript function may be called with more or fewer arguments than it declares. When we call a JavaScript function with fewer parameters, undefined gets passed for the parameters which are not supplied. So in the above scenario, when we don’t pass the third argument to function A, undefined gets passed, and we get the exception that undefined is not a function.
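This "missing arguments arrive as undefined" behavior is easy to demonstrate (showArgs is a hypothetical helper, not part of the example above):

```javascript
function showArgs(a, b, c) {
    return c; // c was never supplied by the caller below
}

var third = showArgs(1, 2); // only two arguments passed
console.log(third);         // undefined
console.log(typeof third);  // "undefined": calling third() would throw
```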

We need to be sure about the following three points while creating a callback function:

1. Make sure the callback function is passed.
2. If the callback function is not passed, handle that case gracefully.
3. Ensure that only a function is passed, not a literal or any other kind of object.

We can implement the points above in the snippet below:

 

function A(param1, param2, CB) {
    var result = param1 + param2;
    console.log(result);
    if (CB !== undefined && typeof (CB) === "function") {
        CB(result);
    }
}

 

In the above listing, we are checking:

1. Whether the value of CB is undefined or not
2. Whether the type of CB is a function or not

By checking the two points above, we can make the callback function optional.

 

Callback with asynchronous call

In JavaScript, sometimes you might be required to work with asynchronous methods, for example when:

1. Reading or writing the file system
2. Calling web services
3. Making an AJAX call, etc.

The tasks mentioned above can take time and would block execution. While reading from the file system or making an AJAX call, you don’t want to wait; you want those operations to be performed asynchronously. We can use a callback function here to handle the asynchronous operation: the callback function will be executed when the asynchronous call completes.
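Here is a minimal sketch of that pattern, using setTimeout to stand in for a slow operation (getDataAsync is an illustrative name, not a real API):

```javascript
function getDataAsync(callback) {
    setTimeout(function () {
        callback([123, 456, 789]); // runs only once the "work" has completed
    }, 10);
}

console.log('request sent');
getDataAsync(function (data) {
    console.log('received:', data); // executes later, when the data is ready
});
console.log('still running');       // printed before the data arrives
```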

Let’s say that you need to consume a service to fetch data. Without AJAX, that can be done as shown in the listing below:

 

getData('serviceurl', writeData);

function getData(serviceurl, callback) {
    // service call to bring data
    var dataArray = [123, 456, 789, 12, 345, 678];
    callback(dataArray);
}

function writeData(myData) {
    console.log(myData);
}

 

In an AJAX call, we can use a callback function as shown in the listing below:

 

function GetUser(serviceurl, callback) {
    var request = new XMLHttpRequest();
    request.onreadystatechange = function () { // can replace this with callback
        if (request.readyState === 4 && request.status === 200) {
            callback(request.responseText); // using the callback
        }
    };
    request.open('GET', serviceurl);
    //request.setRequestHeader('X-Requested-With', '*');
    request.send(null);
}

function DisplayData(data) {
    console.log(data);
}

GetUser('serviceurl', DisplayData);

 

As you see here, we’re using the callback function to print the data. We can even replace the anonymous function assigned to onreadystatechange with a reusable callback function.

That’s about all I’ve got for this post about the callback function – I hope you find it useful, thanks for reading!

Application Migration Webinar Series


In our 15.1 Launch Webinar Series, our experts in design, prototyping, and development walk you through how to build your users what they want: beautiful, high-powered Web apps with greater reach, easier deployment, and lower total cost of ownership.

We've collected the recordings of these presentations here for you in one location, so feel free to pass them along to anyone else you think may benefit from them as well!

Webinar 1: Beyond Responsive Design (Kevin Richardson)

[youtube] width="560" height="315" src="http://www.youtube.com/embed/GzqQX61wwiw" [/youtube]

When faced with the challenge of migrating a desktop application to a web-based environment, product teams often consider only the obvious technical challenges associated with a browser-based display. Some teams, more in-tune with their users’ mobile habits, spend valuable resources crafting a programmatic, responsive solution that will reformat the application to fit the desired device. Neither of these approaches will give users what they need. You need to go beyond responsive and provide the right functionality at the right time - and Kevin will show you how.

Webinar 2: Prototyping to Manage Change (George Abraham)

[youtube] width="560" height="315" src="http://www.youtube.com/embed/XLVetZhmKMw" [/youtube]

Whether you are migrating your application from desktop to web or creating a native mobile app for an existing desktop application, there is going to be change. Your business, your users, and UI patterns will evolve. But while change is inevitable, prototyping is one way to manage this change. In this webinar, we'll show you how to use Indigo Studio, Infragistics rapid prototyping tool, to

  • Create goal/story-driven prototypes;
  • Create and share custom UI libraries with teams;
  • Share your prototypes with your users and stakeholders without requiring them to install anything - and more

 

Webinar 3: Migrating from Desktop to Web with Ignite UI (Ambrose Little)

[youtube] width="560" height="315" src="http://www.youtube.com/embed/t1OSsY4LuhY" [/youtube]

So you’re thinking about migrating your existing line-of-business desktop app to modern Web technologies and are not sure what your options are? We’re here to help. Check out this webinar and see how Infragistics Ignite UI can make you more productive building usable, reliable line-of-business applications for the modern Web based on existing desktop solutions. You’ll learn about two common best practice approaches for modern Web development—Single Page Applications backed by Web services vs ASP.NET MVC—and how Ignite UI helps you build for both of them, depending on which makes the most sense for your current expertise and future architectural needs.

Whether you're just starting out or you're up to your ears in your latest app migration, these presentations will help you make sense of the chaos. Check them out today - and don't forget to make the most of your webinar experience by downloading your free trial of Infragistics Ultimate by clicking the banner below!

The Social Side of SharePoint


Social networks have been available for personal use for a long time - indeed their roots can be traced back to the ’90s and early 2000s with sites like Friendster and MySpace. However, their appearance in the business context took a little longer and is a relatively new phenomenon. So-called “Enterprise Social Networks” are increasingly common nowadays in organizations - employees no longer see them as optional, with many using their social features to do their day-to-day work. Enterprises are increasingly responding to these needs by delivering ESNs to their users.

Social Tools in SharePoint

When looking at SharePoint, social features were first introduced in SharePoint 2007 with MySites. However, functionality to engage with colleagues and external customers was limited and it never really took off. With the arrival of SharePoint 2010 and SharePoint 2013, MySites improved a lot: new features were introduced that allow users to follow sites, people, and documents, engage with other users, and read news feeds.

In 2012 Microsoft announced that they had acquired Yammer - at that time a leading provider of enterprise social networks - which was integrated with SharePoint 2013 and subsequently Office 365. In today’s post we’ll take a look at all the social features currently available, but also at the roadmap to see what features will be added over the next couple of months.

Yammer - the latest features and developments

Yammer integrates nicely with SharePoint and Office 365, but is also still a separate platform. Some examples of this integration currently implemented or being rolled out include:

  • Document Conversations: it is possible to start a conversation on Yammer, straight from a document opened in Office.
  • Yammer Embed: Embed a Yammer feed anywhere in Office 365, to display the latest conversations and interactions for a user or a group.
  • Delve: Delve shows documents stored in Office 365, as well as conversations in Yammer.


Looking at the Office 365 roadmap, we can see that Microsoft is working on some nice new features for Yammer:

  • Add external collaborators to your internal Yammer conversations: Discuss topics with your external customers, vendors, etc.
  • Office Online support: Open and edit documents in Yammer, using Office Online (Word, Excel, PowerPoint, etc.)

Document Conversations: Make it easier to start a discussion

Very often, a user working on a document will want to discuss it with one or more colleagues. That user would probably go to Yammer, post a link to the document, and start a discussion. This will now be a lot easier! As stated before, Microsoft is currently rolling out a feature that enables users to start a Yammer discussion directly from a document. This results in a very smooth integration between document management and collaboration in Office 365 and discussion in Yammer which will be a marked improvement.

Groups - work like a team

Office 365 Groups is the newest way to create a team and work together on documents, start conversations, publish schedules and work collaboratively. It was introduced in September 2014 and Microsoft keeps adding more and more features.

It can be seen as a replacement for old-fashioned team sites in SharePoint. Instead of creating a site and adding users to it, you now create a group. Documents and people can be added to that group much more easily. And when a user is in one or more Groups, he or she will see active or updated documents in Office Delve. A big announcement at Ignite was the deeper integration between Yammer and Groups, which will allow teams to “seamlessly move between Yammer conversations, meetings in Skype for Business, Outlook email, files in OneDrive and content discovery in Delve.”

Delve - the nextgen start page for your documents

The Office Graph (which is based on the Yammer Enterprise Graph) implements machine learning in Office 365. It tracks activity across Office 365, calculates what is likely to be important for a user, and allows apps like Delve to surface this data. When one of your colleagues works on a document, this will be visible in Delve. Popular documents in an employee’s network will also be shown, as they are likely to be relevant.

The biggest new feature in Delve is the so-called Boards. A lot of sessions at Ignite, Microsoft’s most recent event, discussed this new feature. Boards allow people to group information in Delve based on a topic. People can subscribe to a board, receive suggestions for other boards, share boards with other people in the organisation, and so on.

Discuss your projects with your customers

As discussed earlier, Microsoft is working on opening Yammer to external collaborators. Right now, it’s not possible to share Yammer with external users. By opening the ESN to external users, you will be able to share certain discussions with, for example, your customers. This will make them feel more engaged because they can participate in discussions, which will certainly improve customer satisfaction.

Social Networks are important for your business

Sharing knowledge has always been a pain point for organizations. With the introduction of social networks, it is easier for people to share their knowledge with co-workers. It will also make them feel better, because they are able to receive feedback on the knowledge and input in discussions they are sharing. Microsoft SharePoint and Office 365 offer a wide range of social features, and this will only get better. Social features in Delve are becoming increasingly important, and if we’re to believe the hype, Delve will be the biggest workplace collaboration invention since SharePoint!

Looking for a comprehensive and secure mobile Office 365 and SharePoint solution? Look no further. Download our SharePlus Enterprise for iOS free demo now and see the wonders it can do for your team's collaboration and productivity on the go!

SharePlus - Your Mobile SharePoint Solution



11 things about JavaScript functions every .NET developer should know: Webinar Recap


We recently held a webinar for the Indian region on March 27th, titled “11 things about JavaScript functions every .NET developer should know”.

The presentation was attended by a good number of developers from the Indian region, and I covered a range of topics, including:

  • Functions as a statement
  • Functions as an expression
  • Return statements
  • Arguments in JavaScript functions
  • JavaScript functions as constructors
  • Callback functions, and more

[youtube] width="560" height="315" src="http://www.youtube.com/embed/IW-At99e03g" [/youtube]

Many questions were asked during the webinar, and while we tried to answer all of them, we may have missed some, so here are some of the important questions followed by our answers:

What is a function as an expression vs. a function statement?

When we create a function and assign it to a variable, it is known as a “function expression”. A function created starting with the keyword function is known as a function statement. A function statement gets hoisted to the top of its scope.
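A minimal sketch of the difference (the function names are illustrative):

```javascript
// A function statement is hoisted, so it can be called before its definition.
console.log(addStatement(2, 3)); // 5

function addStatement(a, b) {
    return a + b;
}

// A function expression is not hoisted as a function; calling it before this
// line would throw a TypeError because the variable is still undefined.
var addExpression = function (a, b) {
    return a + b;
};

console.log(addExpression(2, 3)); // 5
```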

Which editor are you using?

We are using Sublime Text 2 and running JavaScript using Node.js.

What does a function as a constructor return?

If we invoke a function with the new keyword, the function acts as a constructor and returns the newly created object. Inside the function, this refers to that object.
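For example, with a hypothetical Person constructor:

```javascript
// Illustrative constructor; invoked with `new`, it returns the created
// object, and `this` inside the body refers to that object.
function Person(name) {
    this.name = name;
}

var p = new Person('Dilip');
console.log(p.name);              // Dilip
console.log(p instanceof Person); // true
```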

How do parameters work in JavaScript functions?

  • JavaScript functions do not check the type of the parameters
  • They do not check the number of parameters
  • Along with the named parameters, a function receives an arguments object, an array-like object
  • The arguments object’s length property gives us the number of arguments passed when invoking the function
  • We can pass more or fewer arguments when invoking a function
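The points above can be sketched with a hypothetical sum function:

```javascript
// JavaScript does not check the number (or type) of arguments passed.
// The array-like `arguments` object holds whatever was actually supplied.
function sum() {
    var total = 0;
    for (var i = 0; i < arguments.length; i++) {
        total += arguments[i];
    }
    return total;
}

console.log(sum(1, 2));       // 3  - fewer or more arguments are both fine
console.log(sum(1, 2, 3, 4)); // 10
```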

How do return statements work in JavaScript functions?

  • A JavaScript function may or may not have a return statement.
  • A function without a return statement returns undefined
  • A function with only a bare return statement returns undefined
  • A function with a return statement and an expression returns the evaluated value of the expression
  • A function invoked as a constructor with new returns the newly created object
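A quick sketch of these behaviors (the function names are illustrative):

```javascript
function noReturn() { }                 // no return statement
function bareReturn() { return; }       // bare return, no expression
function withValue() { return 1 + 2; }  // return with an expression

console.log(noReturn());   // undefined
console.log(bareReturn()); // undefined
console.log(withValue());  // 3
```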

What is a JavaScript callback function?

JavaScript callback functions have two attributes:

  1. They get passed as a parameter to a function
  2. They get invoked inside the body of that function
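Both attributes together, in a minimal sketch (the greet function is illustrative):

```javascript
// The callback demonstrates both attributes: it is passed as a parameter
// and invoked inside the body of the receiving function.
function greet(name, callback) {
    var message = 'Hello ' + name;
    callback(message); // invoked inside the function body
}

greet('World', function (msg) {
    console.log(msg); // Hello World
});
```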

Once again, thank you so much for your interest in our webinars – and we look forward to seeing you at a future presentation!

Delegating Is Not Just for Managers


I remember most the tiredness that would come and stick around through the next day.  After late nights where the effort had been successful, the tiredness was kind of a companion that had accompanied me through battle.  After late nights of futility, it was a taunting adversary that wouldn’t go away.  But whatever came, there was always tiredness.

I have a personality quirk that probably explains whatever success I’ve enjoyed as well as my frequent tiredness.  I am a relentless DIY-er and inveterate tinkerer.  In my life I’ve figured out and become serviceable at things ranging from home improvement to cooking to technology.  This relentless quest toward complete understanding back to first principles has given me a lot of knowledge, practice, and drive; staying up late re-assembling a garbage disposal when others might have called a handyman is the sort of behavior that’s helped me advance myself and my career.  On a long timeline, I’ll figure the problem out, whatever it is, out of a stubborn refusal to be defeated and/or a desire to know and understand more.

And so, throughout my career, I’ve labored on things long after I should have gone to bed.  I’ve gotten 3 hours of sleep because I refused to go to bed before hacking some Linux driver to work with a wireless networking USB dongle that I had.  I’ve stayed up late doing passion projects, tracking down bugs, and everything in between.  And wheels, oh, how I’ve re-invented them.  It’s not so much that I suffered from “Not Invented Here” syndrome, but that I wanted the practice, satisfaction, and knowledge that accompanied doing it myself.  I did these things for the same reason that I learned to cook or fix things around the house: I could pay someone else, but why do that when I’m sure I could figure it out myself?

In more recent years, I’ve revisited this practice.  I’m in business for myself now and absolutely maxed out with demands for my time.  I coach software development teams.  I make videos for Pluralsight.com.   I blog 3 times per week and sometimes more.  And I still try to find time to write code and do some application development work when I can.  Juggling all of these things has caused me to economize for time in all possible ways, and I’ve read books like Getting Things Done, The 4-Hour Work Week, and The Lean Startup for ideas on how better to manage my time.

A consistent theme of being more productive and more successful is to be selective about what you do versus what you rely on others to do.  I could spend 4 hours wrangling the garbage disposal, anticipating the satisfied tiredness the next day when I finally emerged victorious… or, I could spend those 4 hours as billable ones, coaching a software team, and hire someone better than me at fixing disposals to come and fix the disposal.  It’s hard to ask someone to help you for a task that you know you could do yourself, but coping with and overcoming that difficulty is the stuff leadership is made of.  It’s the stuff success is made of.  People who become tech leads and architects and do well in these roles are those who learn and understand this lesson.

The closer the task to your area of expertise, the harder it becomes to apply this lesson.  It’s one thing for me to hire people to do plumbing tasks, but quite another for me to hire someone to improve the performance of my website or build me a Nuget package.  And yet, I have to because I simply don’t have time not to.  The field of software development is growing exponentially more specialized, which means that I need to learn the lesson, “just because it’s code doesn’t mean I’m the person for the job.”

It’s in this context that I appreciate the work done by Infragistics.  I can’t tell you how many times I’ve implemented some kind of grid in some GUI somewhere and hand-rolled logic for sorting and filtration.  I can’t tell you how many times I’ve thought to myself, “okay, next up, using some kind of text box template so that the user can click and edit inline.”  This may have made me a better programmer through practice, but it was not a valuable use of my time.  Practice and learning are activities unto themselves, and it’s important to set aside time to do them and to come to understand the problem being solved by the tools that you use, but when you’re on the clock and getting things done, you should not be solving those solved problems.  Let experts solve them for you while you solve business problems.

It took me years to learn this lesson and then to start applying it.  Learn from my mistakes.  Let the experts in their areas help you so that you can find, build, and profit from your own area of expertise.

UXify Bulgaria 2015 - 2 Days of UX Inspiration, Networking and Practical Experience


For the second time in Bulgaria, Infragistics is organizing UXify Bulgaria – a two-day community event, bringing to you the best of the UX design world. UXify will kick off on Friday, June 19th with a full day of inspiring conference seminars, followed by a day of workshops that will give you practical hands-on experience.

UXify Bulgaria 2015

Come hear the thought leaders in User Experience and Customer-Centered Design and get ready to learn from the latest insights and trends in the fields of User Research, Dashboard Design, UX in Front-End Development, Gamification, and more!

This event is an excellent opportunity for User Experience Designers, UX Architects, Visual Designers, Product Managers, developers, and anyone interested in the topic to network, share your ideas, and be inspired by the new trends in UX internationally.

Here is a sneak peek of what you’ll learn at UXify Bulgaria’s 2015 edition:

  • “User Research in the Wild”, or how to understand your users by observing them in their natural habitat – by Jim Ross, Senior UX Architect at Infragistics.
  • “Gamification – a Player Centered Design Process” - In which cases is it possible to gamify UX and which are the good and bad practices when integrating gamification during UX design? Stefan Ivanov, UX Architect at Infragistics will explore this topic.
  • “UX and Front-end Development”, or how UX thinking should be practiced by everyone in an organization, especially by front-end developers, in order to deliver exceptional user experiences in any digital product.
  • “Dashboard Design” - How do you turn data into information that is relevant for decision making and that can be communicated effectively? Infragistics’ Senior Director of Design, Tobias Komischke, will present the best practices for dashboard design.
  • And more UX talks and workshops from leading industry experts from SAP, VMWare, SoftServe, Mentormate, Despark.

Where?

Sofia Event Center, Bulgaria

When?

Jun 19th-20th 2015

We hope you’ll join us! Simply visit http://uxify.net/ to register.

Can't make it to the event? Follow us at #uxify or like our Facebook page.

At Infragistics, we believe that great apps happen by design. Let your users experience your next web or mobile app before you code it with our rapid prototyping tool, Indigo Studio!

//BUILD/ 2015 Event Recap


One of the greatest parts of my job here at Infragistics is being able to organize and attend our events, and one of my favorites is always //BUILD/. This year’s event was again in the Moscone West Conference and Convention Center in San Francisco, CA, and it was held April 28-May 1st. Infragistics' 2015 Booth was #313, so we were again adjacent to the keynote room on the 3rd floor of the Moscone. Our event team this year was Alan Halama, Jason Beres, Brian Lagunas, Ken Azuma, and myself. A pretty all-star team in my personal opinion!

The //BUILD/ event was a huge success, beginning with the debut of Infragistics' Control Freak shirt giveaways. A lot of the team’s time was spent demonstrating a great cross-section of our tools, but by far the most popular demo was our new Xamarin.Forms controls. Due to Microsoft’s announcements, we also fielded a lot of questions about Windows Universal App and how that will impact our entire industry.

  

In addition to the standard conference, this year at //BUILD/ and in conjunction with the Microsoft MVP Program, Infragistics hosted a Microsoft MVP, invitation only, networking and happy hour. This was on the first evening of the conference at an off-site venue, the “Thirsty Bear” brewery. There were tons of MVPs there who all had a chance to pick up some of our custom party SWAG!

Additionally, as Infragistics is a member of the Visual Studio Industry Partner Program (VSIP), we were encouraged to attend an invitation-only event of our own, and the event team went to the VSIP-only Mixology event on the second night of the show, where we got to talk to other partners and increase our ecosystem presence.

Finally, there were two additional highlights that I would like to mention. The first is that MSDN Magazine was able to interview the team in person, which was a great experience. Additionally, Brian was interviewed on-site at //BUILD/ by Channel9. That interview is live now, and you can check it out here.

Overall and as always, Build 2015 was a successful event for Infragistics as we got to stomp the ground out there with all of you! The team had a great time, and of course I did personally as well! Can't wait for the next one!!

Visual Explorations of Sample Size



Drawing conclusions based on small samples is obviously problematic. At the same time, I also wonder whether the rise to prominence of "Big Data" can lead organisations to blindly collect as much data as possible rather than think logically about how much data is actually necessary to perform whatever analysis tasks are required. I'd rather have a bit more data than necessary than not quite enough, but that doesn't mean we should be collecting everything just because we can. We can use statistics to guide us as to how much data we really need, but I recently got to thinking about how we can visually show what effect increasing the sample size has.

To keep things simple I'll just look at the effect of increasing the sample size with random variates from a specific (but rather arbitrary) instance of the normal distribution. I will leave stating the parameters - the true mean and true standard deviation - till later.

The animated gif below shows probability density histograms made from sampling the aforementioned normal distribution. From frame to frame the sample size increases by a factor of ten, and the data used to draw each histogram is a superset of the data in the previous frame. The red curve is the normal distribution with the same mean and standard deviation as the sample data.

Clearly, with a sample size of just ten, the empirical distribution looks nothing like the normal distribution with the same mean and standard deviation. All we can really say from this is that the true mean is likely somewhere close to 4 or 5. But increase the sample to 100 points and we can already see a rough bell-curve. By the time we've made it to 100,000 points we have a very good visual match between histogram and curve. Adding more points doesn't change the look of the distribution or the printed mean and standard deviation.

The animated histogram is good at giving a broad overview of how things change as we add more points, but with only one frame for every factor of 10 we don't see a very detailed picture. Without printing more digits in the parameters of the title at the top, it's not clear just how precisely we know the mean and standard deviation for any particular sample size. For a better idea of this we can pick a parameter and plot that as a function of the sample size, from 2 points (the smallest sample for which both sample parameters are defined) up to ten million. We'll look at the mean first.

Because things change much more quickly when there's only a small amount of data, the above chart is pretty useless. Taking the (base 10) logarithm of the number of points in the sample makes things much clearer.

With only a few points the sample mean is well above 4. But this quickly drops and stabilises once we're into double digits. Beyond a few thousand points there's little discernible variation in the sample mean, but we can zoom in on the right-hand side and see the finer "wobble".

Here's how the standard deviation changes as we change the sample size (note: this is the standard deviation of the sample, not the standard error of the mean!):

The true mean used to generate the sample was 3.9172 and the standard deviation was 0.7200. We can see from the charts that we've got pretty close to these numbers with ten million data points without doing any rigorous statistical analysis. But we weren't that far away at ten thousand data points either. More data means more precision, but if all you needed to know was whether the mean was more or less than 4, ~1,000 points would have been enough.
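The convergence of the sample mean can be sketched in a few lines of JavaScript using the Box-Muller transform. The mean and standard deviation are the ones stated above; everything else is illustrative, and the exact numbers will differ on every run:

```javascript
// Draw a normal variate via the Box-Muller transform.
function normalVariate(mean, sd) {
    var u1 = 1 - Math.random(); // in (0, 1], so Math.log(u1) is finite
    var u2 = Math.random();
    var z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
    return mean + sd * z;
}

// Mean of a fresh sample of n variates with the parameters from the text.
function sampleMean(n) {
    var total = 0;
    for (var i = 0; i < n; i++) {
        total += normalVariate(3.9172, 0.72);
    }
    return total / n;
}

// Watch the sample mean stabilise as the sample grows.
[10, 100, 10000, 1000000].forEach(function (n) {
    console.log(n, sampleMean(n).toFixed(4));
});
```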

To reinforce the point, let's look at just the first 100,000 data points and break these up into ten samples of 10,000. With each subsample we can use the same graphical technique as before. The coloured lines in the charts below show the results for the first 10,000 data points, the grey lines the other subsamples.

To be clear, the purpose of the charts isn't really to see the individual tracks made by one subsample. It's to show that the means and standard deviations of the subsamples are spread widely when each has only a few data points but, at least on a logarithmic scale, quickly converge as we add more points.

Of course all datasets are different and many don't come about through simple random sampling. Neither can you assume your real-world dataset will be as well-behaved as a large collection of computer-generated random variates from a single instance of the normal distribution. Moreover, the chart ideas above aren't meant as straight replacements for rigorous statistics work. But in certain cases they may complement it, e.g. by providing a sanity check of a statistical assessment or as a visual alternative for an audience with less technical expertise.

Looking for a comprehensive and rapid prototyping tool, which allows you to see exactly how your build will look and work before even writing a single code of line? Look no further. Download our Indigo Studio free trial now and see what it can do for you!
