Yammer Analytics – Basic Reports

New post over on Perficient’s Microsoft blog about accessing and understanding Yammer’s basic reports.  This is part 1 of 3 leading up to my SPSPhilly presentation on Yammer Analytics.

 

Access Services in SharePoint 2013

This is a brief recap of my Philly .net Code Camp 2013.1 presentation.

Detailed information about the requirements and configuration of Access Services can be found in this TechNet wiki. It is important to note that this functionality requires the following versions of the related applications.

  • Access 2013
  • SharePoint 2013 Enterprise Edition
  • SQL Server 2012

Access 2013 still provides the classic ‘Desktop’ databases we have built over the past 20 years. The new version also includes a ‘Web App’ database, which creates a SharePoint-hosted app for the web front-end and a SQL Server database for storing the content. This is a huge change from the 2010 version of Access Services, where the tables were converted to SharePoint lists. SharePoint lists provide a better scalability story than the traditional desktop database, but still carry the list throttling limitations inherent in a SP list. Moving the table content to SQL Server addresses both the scalability and size issues by providing a true relational database engine to support the data. The use of SQL Server is completely transparent to the user, as Access Services handles the communication with SQL Server directly.

Access 2013 also provides some templates to get the process started. Many of the templates come in both desktop and web app versions. It is important to note that there is no switching from one version to the other after the template is chosen. There is also no upgrade path from previous versions of Access databases into the Web App database. This may be one reason why SharePoint 2013 still contains a legacy service for hosting Access 2010 web databases that come over in a content migration. Also, desktop databases can contain VBA code-behind, and while the Access database artifacts all have complementary SQL Server objects, there is no analogous object for VBA code in Access 2013 Web Apps. This is one of the limitations of the current implementation: changes are more configuration than customization via code. I think this is an area that will be improved in future releases, most likely by allowing JavaScript and CSS changes as opposed to C#/VB code-behind files.

Let’s talk about the SQL Server component. As mentioned, this must be a SQL Server 2012 database. It is also recommended that Access Services use a SQL Server instance separate from the SharePoint farm instance. This provides isolation for your Access Web App databases in case you need to connect to these databases through other means. A packaged web app deployed to the SharePoint Apps site creates an instance-specific database each time the app is added to a site, so these databases can quickly multiply. The Configuration Wizard will use the existing SharePoint farm database instance as the location for storing Access Web App databases. This can be managed on the Access Services page under Manage Service Applications in Central Administration. It is important to note that the databases created for Access Web Apps are not included in any of the SharePoint farm backups. These databases must be included in your disaster recovery plans to ensure this data is protected.

On the SharePoint side, the custom web app, once deployed to the app store and added to a site, takes on the properties of the host site. SharePoint permissions work on the Access Web App just like they would on a SharePoint list, and any site-specific branding is also applied when viewing the Access Web App.

There is more coming on this subject as we had some good discussion and interesting questions raised during the session.

Entity Framework Questions from Philly PluralSight Study Group

This is a recap of a discussion posted on the LinkedIn PluralSight Study Group.

We had some good discussion last night on the topic of data in MVC applications and Entity Framework. Two questions that came up were about creating a primary key field and creating a nullable int.

First, how is the primary key determined when running the update-database command in Entity Framework migrations?  If you create a class and only specify strings, like this class here,

    public class SomeClass
    {
        // no Id property and no [Key] attribute, so EF cannot determine a primary key
        public string Username { get; set; }
        public string Email { get; set; }
    }

This will throw an error because there is no property that Entity Framework can identify as the primary key.  If you think back to the examples in the video, the primary key properties were named Id, or used the class name plus Id as the property name.  They also had a type of ‘int’.
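For reference, a class that follows that convention might look like this (the class and property names are only illustrative):

    public class UserProfile
    {
        // an int property named Id, or <ClassName>Id, is treated
        // as the primary key by convention
        public int UserProfileId { get; set; }
        public string Username { get; set; }
        public string Email { get; set; }
    }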

So does this mean you have to use this convention to get the database created in EF?  No, it just means you need to leverage the DataAnnotations namespace in the .NET Framework.  You can apply the [Key] attribute to denote the property you wish to use as the primary key.  This allows you to use any name you choose.  It even works with properties that are not ‘int’, so this class here,

    using System.ComponentModel.DataAnnotations;

    public class FooClass
    {
        public string Username { get; set; }
        [Key]
        public string myIdField { get; set; }
    }

will create a primary key on the FooClasses table for the myIdField property.

Applying automatic migration: 201301232131173_AutomaticMigration.
CREATE TABLE [dbo].[FooClasses] (
    [myIdField] [nvarchar](128) NOT NULL,
    [Username] [nvarchar](max),
    CONSTRAINT [PK_dbo.FooClasses] PRIMARY KEY ([myIdField])
)
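For context, migrations pick these classes up through a DbContext in the project. Here is a minimal sketch of what that might look like (the context name is hypothetical, not from the original discussion):

    using System.Data.Entity;

    public class SampleContext : DbContext
    {
        // each DbSet exposed here becomes a table when update-database runs
        public DbSet<FooClass> FooClasses { get; set; }
    }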

The second question was: if you make an int property nullable by using int? in the property declaration, like in this class here,

    public class NoIdClass
    {
        public string Username { get; set; }
        public string Email { get; set; }
        public int NoIdClassId { get; set; }        // primary key by convention (class name + Id)
        public int? NumberOfChildren { get; set; }  // nullable int
        public int UserAge { get; set; }
    }

would this result in a nullable int field in the database?  The answer is yes, it does.  Here is the SQL statement from the Package Manager Console.

Applying automatic migration: 201301232116310_AutomaticMigration.
ALTER TABLE [dbo].[NoIdClasses] ADD [NumberOfChildren] [int]

Viewing the table in SQL shows this structure.

image

Hope this helps!  Let me know your experiences in working with these features.

Windows Store Application: HTML5/JS Validation Checks

The HTML5 specification makes it very easy to include validation on input fields for forms.  Here is an input element for a required field from an HTML5 form tag.

<input id="username" type="text" required />

The ‘required’ attribute is all that is needed for the application to know that this field must contain a value, in this case a text value based on the ‘type’ attribute.  In a typical HTML5 web application running in a modern browser, when the user clicks the ‘Submit’ button a client-side check is made to confirm that a value exists in this field.  If there is one, the information is posted back to the server.  Otherwise, the postback does not occur and the user is notified that this field needs a value before continuing.

In our Windows Store app, things work a little differently.  The ‘required’ attribute is still recognized, but the user is not informed of the missing information until after the submit code is called.  To protect the code from invalid entries, I included a check using the checkValidity() method of the form element containing my input controls.

// look up the form element containing the inputs (assumes its id is "loginForm")
var loginForm = document.getElementById("loginForm");

if (loginForm.checkValidity()) {
      //submit code....
}

This method returns ‘true’ when all of the validation checks in the form element pass.  If any fail, the submit code does not run and the user sees the notifications for what they need to correct on the form.

SharePoint 2013 Geolocation

Geolocation, or ‘spatial data’, is a new feature in SharePoint 2013 that provides the ability to store the latitude and longitude coordinates of an object right in your SharePoint list. The continued growth of mobility and mobile applications requires location-aware data, and now you can easily provide this type of information in your SharePoint data.

Enabling Geolocation

Activation of Geolocation has two parts.  The first is the connection to the Bing Maps service.  This requires you to register and request an API key, which you can do at http://bingmapsportal.com.  Geolocation will work without this, but your maps will show a message across the center stating that your farm does not contain a Bing Maps key.  Once you have a key, apply it in PowerShell with the following command:

Set-SPBingMapsKey -BingKey $yourKey

This only needs to be done once for your farm.  If the key should change, the Set command can be called again to replace the value.  There is also a Get-SPBingMapsKey cmdlet, which will retrieve this information.

The second step requires adding a field of type ‘Geolocation’ to the list.  The bad news about this step is that it is not available for the end user to add to their list from the browser, at least not out-of-the-box.  It must be added through code.  The good news is you have some options for implementing the code.  It can be run as a PowerShell script, in a console application, or on feature activation, for example when creating a new list as part of a feature.  However you choose to do it, the process is the same: get the SPWeb object, get the SPList where you want to add the field, and call the AddFieldAsXml method on the Fields property of the list.  This method takes a string containing the XML definition of the field.

Here is what it looks like in PowerShell:

$fieldXml = "<Field Type='Geolocation' DisplayName='Location'/>"

#get spweb object
$web = Get-SPWeb($weburl)

# get list
$list = $web.Lists[$listName]

# first param – string with xml
# second param – add to default view
# third param – field options
$list.Fields.AddFieldAsXml($fieldXml, $true, [Microsoft.SharePoint.SPAddFieldOptions]::Default)

#call update on the list.
$list.Update()
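As mentioned above, the same field can also be added from a console application using the server object model.  Here is a rough C# sketch of that approach (the site URL and list name are placeholders, and the app needs to run on a server in the farm):

    using System;
    using Microsoft.SharePoint;

    class AddGeolocationField
    {
        static void Main(string[] args)
        {
            // placeholder URL and list name - replace with your own
            using (SPSite site = new SPSite("http://yourserver/sites/yoursite"))
            using (SPWeb web = site.OpenWeb())
            {
                SPList list = web.Lists["Your List"];

                // same field definition used in the PowerShell example above
                string fieldXml = "<Field Type='Geolocation' DisplayName='Location'/>";

                // add the field and include it in the default view
                list.Fields.AddFieldAsXml(fieldXml, true, SPAddFieldOptions.Default);

                list.Update();
            }
        }
    }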

This is great if you only want to add the Geolocation column one list at a time.  If you modify either of the above examples and add the field to the SPWeb object instead of the SPList object, you now have a ‘Site Column’ with the Geolocation type.

$fieldXml = "<Field Type='Geolocation' DisplayName='Geo Location' Name='geolocation' Group='custom fields'/>"

#get spweb object
$web = Get-SPWeb($weburl)

# first param – string with xml
$web.Fields.AddFieldAsXml($fieldXml)

#call update on the web.
$web.Update()

A Site Column can be added to any list on the site from the ‘Add From Existing Columns’ link in the list settings.  You have now allowed the end user to add this column to any list on their site.  So now that this column is there, what can we do with it?

Using Geolocation

The geolocation field holds two values: the latitude and the longitude of the global position for the item in the list.  If you are at the location you want to use for the list item, this information can be populated automatically by clicking the ‘Use My Location’ link on this field.  This makes a client-side call to determine the location based on the IP information of the device making the call.  It does not require GPS on the device.

The automatic method limits the locations you can enter for the list item, unless of course you spend a lot of time traveling.  So this data can also be populated manually by the end user.  There are many sites that can provide these values given an address, and Bing Maps provides this data as well.  Enter an address and click the zoom link for the address.

image

Once you zoom to this spot, right-click on it.  The information window that appears contains links for directions and navigation, but also the latitude and longitude of the spot.  This same dialog displays when right-clicking anywhere on the map, so you can find the geolocation data for any place on the map just by interacting with it.

image

 

Viewing Geolocation

Now that geolocation is enabled and you can enter data, how is this information displayed in the SharePoint list?  When I added the geolocation column in PowerShell, it was also added to the default view.  The field displays a globe icon, and clicking it presents a Bing Map with the corresponding point pinned at the center.  This small view of the map provides scroll and zoom features.  The location can also be loaded in a full Bing Maps page, to leverage all of the map features, by clicking the View on Bing Maps link at the bottom of the window.

image

The addition of the geolocation column to the list also provides a new view template for creating custom Map Views for the list.  The columns selected for the view are displayed for each item listed in the view.  The items are presented on the left side in a list-view control, with their locations noted on the map to the right.  The display area of the map will zoom out to display all of the locations.

image

Team Foundation Services on Windows Azure

Build provided a number of great announcements and information related to Microsoft’s Windows Azure platform.  This platform is a collection of products that provide hosting and services for start-ups, large businesses and even the hobbyist developer.  I plan to write about some of these services and how they integrate together.  I am going to start with one of the lesser known services, but certainly one that is an integral part of the development process, Team Foundation Services (TFS).

TFS on Azure is not new.  It was announced at Build.  Build 2011, that is.  Yes, it has been out for over a year, and I still find people who are unaware of it when I mention it.  Originally launched on TFSPreview.com, it is still in preview mode, which means it is free for now.  It still offers all of the features you have come to expect from TFS: user story creation, bugs, tasks, iterations, and the wonderful burndown charts.  It does have a more formal URL now in TFS.VisualStudio.com.  If you created a TFS Preview instance (e.g. mysite.tfspreview.com), it maps to the new VisualStudio.com domain, so there is no need to go out and reserve your preferred name on this domain.

Getting Started
Getting started is easy.  As with all of the Windows Azure services, you will need a Microsoft Account (formerly a Live account) to sign up.  Sign in to the TFS site with your Microsoft Account.

Click the Sign up for free link.

image

Provide a name for your TFS environment.

image

That’s it.  You’re done.  You now have a full instance of TFS available to you and your team (up to 5 people per project).  For a small development shop, this solution provides all the benefits of TFS without having to install and manage the TFS infrastructure on premises. So now that this is provisioned, let’s connect it to your Visual Studio instance.

Connecting Visual Studio to TFS
I will use VS Express Web Edition 2012 to step through the connection process.  This will also work with the full versions of Visual Studio 2012.  It will also work with Visual Studio 2010, but this requires at least SP1.  VS Express Web Edition offers some really great tools for working with Windows Azure services, so it is the express version of choice if you are doing Azure or web development.  There are other versions including versions for Windows 8 and Windows Phone 8.  You can see them all here.  They will also connect to TFS in the same manner.

In the menu, select Team | Connect to Team Foundation Server.  In the top right corner of the dialog box, click the Servers button, and in the Servers dialog, click the Add button.  Enter the account URL for the TFS instance you created and click OK.  This is now the selected option in your TFS dropdown list, and it will display the project collections and projects available for connection.  You can add multiple server connections, for TFS on Azure or a TFS instance at work, allowing you to connect to multiple projects.

image

Configuration for Different Windows 8 and TFS Microsoft Accounts
Windows 8 can be accessed with a Microsoft Account.  This makes it an easy experience to connect to all of Microsoft’s services without having to continually provide your credentials.  It can also be challenging if you want to connect to services with a different Microsoft account. 

If you are running Windows 8 and the Microsoft Account you used for TFS is different from the Microsoft Account used to log into your Windows 8 machine, you will see VS Express attempt to sign in to your TFS account with your machine’s Microsoft Account.  You will see two pop-up windows (a try and then a retry) as it attempts to do this.  You will then get the following message.

image

This is because your Windows 8 OS Microsoft Account does not have permissions on this instance of TFS, which was created with a different Microsoft Account.  Hey, it happens, and I’m sure not just for testing or blogging purposes.

Here are the steps to associate this TFS instance in VS if using a different Windows account.

  1. In VS Express, open the web browser (View | Other Windows | Web Browser).
  2. Navigate to Live.com.  If you are signed in, select Sign Out from the menu in the upper right corner.
  3. Sign in with the credentials used to create the TFS account.
  4. Go through the Connect to Team Foundation Server steps above.  I’ve seen people reference shutting down and re-launching Visual Studio, but I did not need to do this.

This is only needed for the initial setup.  Now that VS Express is aware of this TFS instance, it shows in your list of available TFS connections in the dropdown.  The currently authenticated account information and a Sign Out / Sign In link appear in the lower left corner of this dialog, so you can see what account you are using to connect and, if needed, provide the proper credentials for each instance right in this screen.

Escaping SharePoint–Code Camp 2012.1 Presentation

This is just a quick recap of my presentation at this weekend’s Philly .net Code Camp.  The purpose of the talk was to show that it is easy to incorporate the tools and technologies used in other web-based and .NET 3.5-based applications into your SharePoint solutions.

Here is a list of the items covered and some related links.  If you have questions, please leave a comment, so others can benefit from the discussion. 

  • Including Third Party Web Controls: SharePoint is a web technology running on IIS and .NET 3.5, so can you include these components in your SharePoint application pages and web parts?  Absolutely!  They work exactly like they do in .NET web pages.  You may need to transform your SPListItemCollection using the GetDataTable method, but that is an easy call to make (see the sketch after this list).  I showed how all of this works using Telerik ASP.NET AJAX controls, but I have tried other vendor controls in the past with similar success.  Look for a future post that takes you through the steps.
  • Using LINQ to Objects to Perform Group Join of List Data: this is all explained in a previous blog post.
  • Using jQuery to Expand and Collapse the Navigation Menu: this is also explained in a previous blog post.
  • Using jQuery Framework to Provide Rich User Experience: There are a large number of frameworks that can perform just about any function you need to do on the web, so there is little reason to write your own.  Getting them into your SharePoint solution can be tricky and require some additional troubleshooting.  For my process, I like to download the tools and get them working in a normal ASP.NET web application (File | New Project | ASP.NET WebForms application, 3.5 version).  This way you get to understand how they work without wondering if SharePoint is impacting your ability to use the tool.  Once I get it working here, it is usually an easy jump to push this into a SharePoint solution.  The tool I showed for this part of the presentation was Joyride, and you can read all about this tool here.  I also plan to create a post about this part of the presentation showing the basic installation and how to move the ordered list from being hard coded in the application page into a SharePoint list where the user can modify the content.
  • Using SharePoint List Data to Populate an ASP.NET MVC 4 Single Page Application: The goal here was to take data created and stored in SharePoint and surface it through a web application.  You can find out about the new Single Page Application templates in this TechEd Netherlands video.  This was an interesting solution given that MVC 4 is still in beta and it works best with Entity Framework.  I will write this up and post a link here when it is done.
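Here is the sketch referenced in the first bullet above: a rough illustration of binding SharePoint list data to an ASP.NET grid by converting the SPListItemCollection to a DataTable.  The page class and list name are placeholders; a Telerik grid declared in the markup would be bound the same way.

    using System;
    using System.Data;
    using System.Web.UI.WebControls;
    using Microsoft.SharePoint;
    using Microsoft.SharePoint.WebControls;

    public class ListDataPage : LayoutsPageBase
    {
        // declared in the .aspx markup of the application page
        protected GridView ItemsGrid;

        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);

            // placeholder list name - replace with your own
            SPList list = SPContext.Current.Web.Lists["Tasks"];
            SPListItemCollection items = list.GetItems(new SPQuery());

            // GetDataTable flattens the SPListItemCollection into a DataTable
            // that any ASP.NET data-bound control can consume
            DataTable table = items.GetDataTable();

            ItemsGrid.DataSource = table;
            ItemsGrid.DataBind();
        }
    }

One note: GetDataTable returns null when the query returns no items, so production code should check for that before binding.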

If you were there, don’t forget to fill out the evaluation for the speakers and the event itself.  (And get another chance to win an Xbox.)
