
MvcSitemapProvider v4 #119

Closed
maartenba opened this issue Dec 13, 2012 · 494 comments


@maartenba
Owner

It's been a while since ASP.NET MVC 4 came out, and seeing that ASP.NET Web API dropped its dependencies on System.Web, I'm looking at evolving MvcSitemapProvider in that direction too...

vNext would:

  • Drop all references to System.Web and ASP.NET's sitemap provider model
  • Provide support for dependency injection
  • Get split into separate NuGet packages (Core containing assemblies, Web containing views / helpers)
  • No longer rely on the internal view engine

That's the technical side. What about features? I would love to hear what you are using today, what you would not want to miss, what's currently missing, anything!

@waynebrantley

Not sure what dropping those dependencies will get us, but hey, if you can do it - that would be great.
Adding dependency injection would be great.
Splitting into separate NuGet packages would help a bunch.

Features:
Implement an HttpModule that can be plugged in during development at the URL sitemaps.axd (a rough sketch follows this list). That would give a full diagnostic view of the tree and relationships.
Scan for dynamic node implementations? #98
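
A minimal sketch of what that diagnostic module might look like (the class name and the rendering logic are placeholders, not actual project code):

// Sketch only: a development-time IHttpModule that intercepts sitemaps.axd.
public class SiteMapDiagnosticModule : System.Web.IHttpModule
{
    public void Init(System.Web.HttpApplication application)
    {
        application.BeginRequest += (sender, e) =>
        {
            var app = (System.Web.HttpApplication)sender;
            if (app.Request.AppRelativeCurrentExecutionFilePath == "~/sitemaps.axd")
            {
                app.Response.ContentType = "text/plain";
                // A real implementation would walk the node tree here and dump
                // each node's key, URL, and parent/child relationships.
                app.Response.Write("SiteMap diagnostic dump...");
                app.CompleteRequest();
            }
        };
    }

    public void Dispose() { }
}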

I think more than features, there are many bugs/issues:

Memory leak issues - #118
Documentation missing many features and solutions
Potential default implementation changes - #115
Order of menu items defined in code in relation to the xml file - #114
Some way to specify visibility (and other custom attributes) on the SiteMapNode attribute #113
Event not firing - #101

My latest code changes you just accepted fix this, so it should be closed: #102

Many 'issues' are open that should be closed - but I think you are the only one with rights to get these cleaned up. In general, just more activity and reviving of this great project! :-)

@NightOwl888
Collaborator

Here is my wish list:

  1. Dependency injection should be able to be done with any DI container (or at the very least all those that support property setter injection - sometimes there is no other good way except using setters; see the sketch after this list).
  2. Dependency injection should support both IDependencyResolver and (more appropriately) IControllerFactory. IDependencyResolver, while being the official Microsoft-sanctioned way of doing DI in MVC, uses the service locator anti-pattern. See the book Dependency Injection in .NET, p. 207, for details. Of course, a custom solution that uses neither would also work.
  3. Dependency injection should be able to provide all configuration so it is not read directly from the config file.
  4. Move the logic about loading nodes from external dlls outside of the main class. It looks like an assumption was made about plugins being part of the current AppDomain, which isn't the case when using MEF or other DI containers to create plugins. Perhaps it would be better just to make an extensibility point for plugins rather than providing an implementation.
  5. Support for getting the entire node tree from the Business Logic Layer (instead of a file) so it, in turn, can load from other sources (such as a database or a Lucene.NET index). Of course, doing this can also make the plugins the responsibility of the BLL (where it should be).
  6. Multi-tenant support. Each tenant should be able to load, cache, and invalidate the cache for its own site structure individually.
  7. Support for multiple paths to the same page. I did this with the standard ASP.NET sitemaps implementation using a combination of a user session variable that was set on page get and a different XML file for each path (maintenance nightmare). It worked, but I would like to find a more maintainable way.
  8. Unit tests. When there is no documentation, these can be the next best thing. Not to mention, they make the code easier to change without causing negative effects.
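
To illustrate point 1, here is a minimal sketch of property setter injection (the type names are illustrative, not actual project types):

// Sketch only: when a framework creates the instance itself (as the ASP.NET
// provider model does), constructor injection isn't possible, so the container
// falls back to populating a public property after construction.
public interface ISiteMapUrlResolver
{
    string ResolveUrl(string nodeKey);
}

public class ProviderStyleComponent
{
    // Set by the DI container (or manually) after the framework instantiates the class.
    public ISiteMapUrlResolver UrlResolver { get; set; }
}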

That said, I am not willing to wait until you get around to it to make a version I can use. I'd be happy to contribute what I come up with back into this project if there are no objections.

I was actually looking into creating another sitemap provider to implement the functionality so I can reuse the UI and sitemap XML functionality that is already here, but based on your comments about dropping the dependency on System.Web, perhaps it would be best to start off with some kind of SiteMap Loader interface to handle the starting point where the provider would normally go? I am open to suggestions.

Of course, I could just make a standard SiteMap provider that is just a shell that delegates its work to other injected classes and then later the SiteMap provider can be dropped without having a huge impact on the rest of the project.

@maartenba
Owner Author

I’m working on some initial bits for vNext; if you want, I’ll post them on a separate branch.

The idea there is to:

  • Use IDependencyResolver out of the box
  • Have a node tree in memory which is mapped to a SiteMap for every request. This should provide more flexibility and better multithreading. The multi-tenant aspect is also easier to do based on that.

@NightOwl888
Collaborator

Sure, I'd like to see the latest and greatest. Thanks.

@maartenba
Owner Author

See branch "v4". Do note that this is very much a work in progress and that no final product (nor working application) is in there. It does give you some pointers of where I'm heading.

@NightOwl888
Collaborator

I took a look and had a few thoughts/questions:

My hope is to introduce a couple of seams into the provider: 1) to allow the tree (or a tree provider) to be injected by the host application, and 2) to allow the configuration options to be injected by the host application. Naturally, it makes sense for them to default to loading from the Mvc.sitemap file and the web.config file, respectively. It also looks like it makes sense to have a dynamic node provider (seam) that defaults to an assembly scanner.

It looks like you created an interface named ISiteMapSource that would work great for the tree injection, but there is no way to inject a custom provider other than through the config file.

I realize this is just prototyping, but what was the reasoning behind using the ASP.NET sitemap provider as a base? That is what I had in mind as well until you mentioned you might do otherwise.

Pros:
The ASP.NET sitemap provider provides a clean separation between the "data access" code and the presentation logic.
The ASP.NET sitemap provider is an interface that many other developers are familiar with.

Cons:
YAGNI - ASP.NET sitemap provider technically doesn't need to be there.
ASP.NET sitemap provider is somewhat difficult to inject dependencies into. It could be overcome by using property setter injection or by raising events to provide dependencies (for those who have not yet entered the DI world).

Personally, I have no preference if the ASP.NET provider stays or goes, but clearly the project needs to go in a different direction if the provider is not used.

Finally, how much time do you have to dedicate to this project? The reason I ask is I could either start working with this full time now or circle back around after about 3 weeks of working on other bits of my app and join in when you have more of a base to work with. Of course, if this project doesn't change much in that timeframe it doesn't make much sense for me to wait.

@maartenba
Owner Author

I was thinking of building the provider in 3 stages (a rough sketch follows the list).

  1. At application start, build an in-memory sitemap that contains all nodes based on ISiteMapSource
    1.1. Create list of nodes
    1.2. Allow for a visitor class to run over all nodes, for restructuring the sitemap or optimizing nodes
    1.3. Store this in an application-wide cache
  2. For every request, clone the original sitemap containing only appropriate nodes (which are accessible and so on)
  3. Map this to a traditional sitemap tree which uses the ASP.NET provider model
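
In rough code, stages 1 and 2 might hang together like this (a sketch only - ISiteMapSource exists in the v4 branch, but the member shown here and all the other names are placeholders):

using System;
using System.Collections.Generic;
using System.Web;

// Placeholder for the ISiteMapSource in the v4 branch (the real signature may differ).
public interface ISiteMapSource
{
    SiteMapNode ProvideBaseNode();
}

// 1.2: a visitor runs once over every node after tree creation.
public interface ISiteMapNodeVisitor
{
    void Execute(SiteMapNode node);
}

public class SiteMapPipeline
{
    // Stage 1: at application start, build the full tree and cache it.
    public SiteMapNode Build(ISiteMapSource source, IEnumerable<ISiteMapNodeVisitor> visitors)
    {
        SiteMapNode root = source.ProvideBaseNode();    // 1.1: create the list of nodes
        foreach (var visitor in visitors)
            Visit(root, visitor);                       // 1.2: restructure/optimize nodes
        HttpRuntime.Cache["siteMapRoot"] = root;        // 1.3: application-wide cache
        return root;
    }

    // Stage 2: per request, clone only the nodes the current user may access.
    public SiteMapNode CloneForRequest(SiteMapNode cachedRoot)
    {
        throw new NotImplementedException("filter by accessibility, return the request tree");
    }

    private void Visit(SiteMapNode node, ISiteMapNodeVisitor visitor)
    {
        visitor.Execute(node);
        foreach (SiteMapNode child in node.ChildNodes)
            Visit(child, visitor);
    }
}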

That last stage is of concern to me: drop it or not? I would like to drop it but this breaks anything out there that uses sitemaps as the data source.

I'll be traveling the next weeks so time will be limited, feel free to do some prototyping already. Earlier feedback is better imho :-)

@NightOwl888
Collaborator

Agreed. Dropping the provider trashes any chance for a smooth upgrade. But then, it could also make the library easier to work with for newcomers.

Looks like we are pretty much on the same page.

  1. I was thinking instead of making this step lazy-loading rather than doing it at application start.
    1.1. Agreed - except it would be nice if an alternate implementation could be provided.
    1.2. Not sure if I totally understand this step - is it a replacement for the "dynamic nodes" in the current library?
    1.3. Agreed - also makes sense to allow an alternate implementation as many applications have a centralized caching manager they would want to take advantage of.
  2. This might be where multiple paths to the same page can be implemented. It definitely can be used to create alternate menus that follow a different structure than the main menu.

Just so you understand my end game, I am planning to make an end-user drag and drop tree interface that can be used to create/rearrange the site structure. There is a fully-functional sample of exactly what I would like to achieve here: http://www.bigace.de/demo.html. Have a look at the v3 administration interface and go to Pages > Administration from the menu. The pane on the left allows for creating and moving of pages. All changes are reflected in the main menu of the site and the site map path.

Note the following:

The tree isn't simply translated; there is a separate tree structure for each language.

Anyway, I will get started working on a prototype. For now, I will assume the provider will stay, but my provider implementation will just be a shell that can be removed without too much trouble. I will try to carry forward your ideas, but my main focus will be on creating an implementation that can be easily integrated with a custom business layer and, by extension, a custom data source.

@maartenba
Owner Author

On 1.2: that's just a step which allows you to plug in between tree creation and tree usage. I was thinking of allowing things like node optimization (e.g. for some nodes the URL can be made static for all requests instead of having to resolve it every time). It also allows you to restructure some things, or replace the entire tree with another one. Look at it as the last chance to do anything with the tree that applies to all requests from then on.
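
For example, a URL-freezing visitor in that stage might look like this (a sketch reusing the hypothetical ISiteMapNodeVisitor from the earlier sketch; the attribute name is illustrative):

// Sketch: pre-resolve app-relative URLs once, before caching, instead of on
// every request. Nodes flagged (via an illustrative attribute) as having
// request-dependent URLs are left alone.
public class UrlFreezingVisitor : ISiteMapNodeVisitor
{
    public void Execute(System.Web.SiteMapNode node)
    {
        bool urlVariesPerRequest = node["urlVariesPerRequest"] == "true";
        if (!urlVariesPerRequest && node.Url != null && node.Url.StartsWith("~/"))
            node.Url = System.Web.VirtualPathUtility.ToAbsolute(node.Url);
    }
}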

But yes, seems we're on the same page. Looking forward to some prototypes!

On a side note: I'm planning on keeping the current SiteMapXmlSiteMapSource to have at least some backward compatibility. Other than that, feel free to cut the cord with the ASP.NET provider model.

@waynebrantley

I do not see any real problem with dropping ASP.NET provider model support. The only things that used it were WebForms-based anyway.

@NightOwl888
Collaborator

@maartenba

I am pursuing the path of dropping support for the ASP.NET provider. This turned out to be a pretty deep rabbit hole of rooting out necessary bits from the .NET framework, but it is clear that to gain control over the lifetime of the in-memory sitemap structure it is easier if it is not part of the provider.

The next step is to try to make what I have function the same way it did before by delegating calls from an actual provider. However, although I have a fairly good idea at this point what the dynamic nodes and assembly-based (reflection) nodes are for, there don't seem to be any samples for the reflection-based nodes in the project, and the samples provided for dynamic nodes are very basic. Do you have a working demo project you can provide so I can see how this works?

I still have a fair amount of refactoring to do after getting the code functional, but once I have working code, keeping it in a working state should be much easier.

@maartenba
Owner Author

Nothing I can share. The reflection-based nodes can be created using one of the [MvcSiteMapNode] attributes; it's important to specify their ParentKey property to build the tree structure.
Dynamic node providers are in the MvcMusicStore sample, although they indeed are pretty basic. In theory you can add child nodes to the nodes generated there and so on.

@NightOwl888
Collaborator

Yesterday, I finally got this to a point where it is functioning without using the ASP.NET provider. I ended up dropping the ASP.NET provider altogether because it was just getting in the way of seeing the big picture. A compatibility layer (builder) can still be added to load the data from an alternate source (such as an ASP.NET sitemaps provider) if needed. If you are interested in having a look, the latest can be found here: https://github.com/NightOwl888/MvcSiteMapProvider/tree/v4, though it isn't very pretty at this point, is poorly documented, and the localization code isn't yet fully implemented so nothing shows up for localized nodes.

My vision is to have an object graph of smart business objects whose purpose is to maintain the hierarchical relationships between nodes. The outside world should have no direct access to this data - it should only have access to methods and properties that all go through the business logic layer within (or injected into) the object graph. If you are familiar with the CSLA framework, this is sort of what I have in mind for managing the behavior associated with the tree structure itself (without the dependency on CSLA, that is). Here is an overview of these objects.

SiteMap - Replaces SiteMapProvider and is essentially the root object of the graph.
SiteMapNode - Replaces the original SiteMapNode in System.Web.
SiteMapNodeCollection - Replaces the original SiteMapNodeCollection in System.Web.
SiteMapNode has other child objects (such as Attributes, RouteValues, etc.) that do not yet have their own implementations, but probably will.

As for the rest of the code, I envision a set of services whose default implementation can easily be swapped by a custom one by simply creating an implementation and then changing the DI configuration to inject them. This will allow for a wide variety of use cases.
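
For example, swapping in a custom URL resolver would just be a registration change (a sketch; the interface name mirrors the kind of service being discussed, and the registration line uses StructureMap-style syntax):

// Sketch only - implement the service...
public interface ISiteMapNodeUrlResolver
{
    string ResolveUrl(string nodeKey);
}

public class MyUrlResolver : ISiteMapNodeUrlResolver
{
    public string ResolveUrl(string nodeKey)
    {
        // custom URL logic here
        return "/" + nodeKey;
    }
}

// ...then point the DI configuration at it instead of the default:
// For<ISiteMapNodeUrlResolver>().Use<MyUrlResolver>();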

As it turns out in a multi-tenant application, deciding when a request should belong to one tenant vs another is an application-specific thing. Therefore, the logic of when a new sitemap should be built and cached needs to be easily overridable by the application. I made a SiteMapLoader to handle this task, but as it stands it is still broken because it allows the UI to override the "current" sitemap in some cases but not others. It also doesn't have a very robust way to map a specific SiteMapBuilderSet named configuration to a given request. This is something I will address shortly.

Semantics:

Now some decisions need to be made to shape the direction from here. Some things jumped out at me as being unusual, but it is difficult to tell exactly what the underlying reasons for them are. Other things just need to be put out there for discussion.

  1. In the DefaultSiteMapProvider, there is a method called NodeMatchesRoute. This method first determines whether the route matches the RouteValues, then determines whether the route matches the Attributes of the same node. I checked and couldn't seem to find the place where the route values were actually being supplied to the Attributes, but being that I have working code without these values being supplied, I am wondering if there is really any point to making the comparison with Attributes or if there is some hidden underlying reason why the route values need to also be in the Attributes collection? If not, this comparison logic could be made much simpler.
  2. I ended up flattening the hierarchy of XSiteMapNode, XRouteSiteMapNode, and XMvcSiteMapNode because these classes didn't appear to provide any value other than syntactic sugar. Is there some underlying reason why they were broken up this way? I am starting to see patterns of other ways the node class can be broken up (such as a class that does request caching of certain methods and cascades the rest of the calls to the underlying implementation), but maybe there is some value to your model I have overlooked.
  3. The provider models could be configured in the XML file in a hierarchy (originally, I had assumed the only way to configure them was in a list). There is a ParentProvider property that provides access to the parent, and the GetParentNode() method makes a call to the property when it reaches the top of its hierarchy, but other than that there is no implementation of this feature. However, the XmlRolesAclModule makes a call to provider.ParentProvider. I was unable to work out exactly why this call was here and what should be done in its place (if anything).
  4. The AddNode() method from the DefaultSiteMapProvider first makes a call to FindSiteMapNode() and then RemoveNode(). I tried taking these out to see what happened and it crashed when I tried to add dynamic nodes to the structure. Do you have more information about this issue that could perhaps be used to move this fix to a more appropriate place?
  5. Microsoft's original implementation of SiteMapNode has a locking mechanism to make the objects read-only after they have been loaded. This was only implemented by XmlSiteMapProvider (which this project has no reference to). I am thinking about going in this direction because I can't think of any reason why the presentation/UI code should need to modify any properties of a SiteMapNode (in fact, changing them in many cases can have disastrous consequences). Is there any specific reason you can think of why leaving the properties writeable after they have been loaded may be necessary? (A small sketch of this locking follows this list.)
  6. Dividing into multiple DLLs. My first thought was - not another dependency! But I have come around to look at this in a different way. An application may want to implement providers and/or other interfaces, or even inherit classes, from within the application's business layer, and setting a reference to a DLL that references System.Web.Mvc means the business layer will have an indirect dependency on a presentation technology (yuck!). So I am seeing that a small part of the code should go into a shared library of some sort that is referenced by a separate presentation DLL within this solution. Clearly the ISiteMapBuilder and IDynamicNodeBuilder classes belong in this business DLL. Calling it "core" implies that it does more than it should, though - essentially we just want to make a safe DLL to reference without indirectly referencing the presentation layer. However, most of the code should probably end up in the presentation DLL, because MvcSiteMapProvider is for the most part a presentation technology that should sit above any application's business layer.
  7. Earlier, regarding the Visitor stage, you stated that "for some nodes the url can be made static for all requests instead of having to resolve it all the time". You made it sound as if most nodes MUST be URL resolved all of the time. I am now seeing the value in making a Visitor stage, primarily to resolve the URLs before caching. But I am wondering in what cases you would NOT want to resolve them before caching, and in what cases you would wait until later?
  8. On the other hand, I don't see the value of restructuring the node tree using the visitor stage. What would be the point of a builder class structuring nodes one way just to have a visitor class change it in a different way? I say if you need a different structure, just provide your own builder that can build that structure right the first time. In fact, I am thinking that the code that drives the visitor should just be a hard-coded recursive loop over the node tree that fires an Execute() method and supplies the current node as a parameter.
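
Here is the kind of locking I have in mind for point 5 (a sketch; the member names are illustrative):

// Sketch of point 5: after loading, the loader calls Freeze() and any later
// write from presentation/UI code throws.
public class LockableSiteMapNode
{
    private bool isReadOnly;
    private string title;

    public string Title
    {
        get { return title; }
        set { ThrowIfReadOnly(); title = value; }
    }

    internal void Freeze()
    {
        isReadOnly = true;
    }

    private void ThrowIfReadOnly()
    {
        if (isReadOnly)
            throw new System.InvalidOperationException("The node is read-only once the sitemap is loaded.");
    }
}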

As for referencing IDependencyResolver, I just didn't see any need. Using constructor injection, all dependencies can be provided by the DI container directly, and doing so gives the DI container control over the lifetime of the objects. This has a distinct advantage here - if we have a project with 10,000 nodes and we want to inject a UrlResolver service into those nodes that contains custom logic, nothing is gained (but a lot of memory and/or CPU cycles lost) by creating 10,000 UrlResolver service instances using IDependencyResolver. On the other hand, using the DI container to supply these dependencies can ensure that only 1 instance is ever created regardless of how many nodes use it.

To support cases where no DI container is used, I plan on making a basic object composer that contains the logical defaults. It would read configuration values from specific AppSettings keys and use them to configure similar options to what is available now. It would just require a single line of code to be placed in the Application_Start method, such as Composer.Compose(). That said, I don't plan to spend much time on this because it is not something I will need. A lot could be done here - a custom configuration section for MvcSiteMapProvider, configuration for custom implementations of interfaces put into the config file, etc. However, being that this class would be totally replaced if a DI container were in use, you may want to encourage developers to use DI if they have a pressing need to override an implementation with another. Alternatively, this could be built out into an effective way to configure MvcSiteMapProvider without using a DI container.

Ok, that turned out to be very long-winded. The sooner I get feedback on these points, the more likely the feedback is to influence the direction I go. Thanks.

@waynebrantley

One other issue I have is that many times the same sitemap node can appear in several different trees.
Imagine a simple example where you have the same menu item in two different areas.
This makes a sort of 'circular' structure. I wonder if that can be part of this new design?

@NightOwl888
Collaborator

@waynebrantley

Is this different from the "support for multiple paths to the same page" item in my wish list above? It sounds like a similar scenario; at least, the way you describe it as the "same sitemap node in many trees" sounds like the same solution I came up with before - creating multiple XML files and using session state to track the current one, overriding it to the "default" if anything but a certain set of pages was requested next chronologically.

In short, my solution was to fix an issue with a many-to-many relationship between categories and products where one product could belong to many different categories.

I plan to address this, but not until after a few more rounds of refactoring. I haven't thought of a way to do it without using some kind of user-session-based tracking, though. Well, there is using the URL query string to track the path, but that is not SEO friendly at all and is not a path I plan to take. Let me know if you have any other ideas as to how this could be implemented. The main issue at hand is how to tell where you are in the map if the requested node exists in 2 different places.

I guess another possible solution to this would be to actually put the same exact product on multiple unique URLs. This would force the burden of properly using the canonical meta tag onto the developer, but since this is something that is supported by both Google and Bing already this could potentially be a solution.

@NightOwl888
Collaborator

Wow - the light bulb just came on. I never really considered making the canonical meta tag generation a part of the sitemap, but now that I think about it, it makes perfect sense to make a "CanonicalHelper" in the same way the "TitleHelper" works.
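
Something like this hypothetical helper is what I have in mind (a sketch modeled on the role TitleHelper plays, not actual project code):

using System.Web.Mvc;

// Hypothetical sketch: an HTML helper that emits the canonical link tag for a
// given URL, the same way TitleHelper emits the page title.
public static class CanonicalHelper
{
    public static MvcHtmlString CanonicalTag(this HtmlHelper helper, string canonicalUrl)
    {
        var link = new TagBuilder("link");
        link.Attributes["rel"] = "canonical";
        link.Attributes["href"] = canonicalUrl;
        return MvcHtmlString.Create(link.ToString(TagRenderMode.SelfClosing));
    }
}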

@waynebrantley

@NightOwl888 - yes, that is what is needed: a many-to-many relationship would be great. When a node exists in two places and you need to 'find' where you are in the sitemap, I was thinking the entries would have a priority - so the FIRST item in the many-to-many relationship would be the one it is in. For your example, if a product was in categoryA and categoryB and you went to the product page, it would show categoryA as the parent if categoryA was the 'first' link in the many-to-many relationship. This could of course be an injected piece of code where users could apply other logic based on their specific requirements.

@NightOwl888
Collaborator

@waynebrantley - let's think about a side effect of that for a minute. If I were a customer browsing your site and first navigated to categoryB, the sitemap path would be something like "Home > CategoryB". Now I choose the product (let's just call it Product1) from the categoryB page. When the product page is displayed, it shows the sitemap path as "Home > CategoryA > Product1". It just jumped from categoryB to categoryA for no apparent reason.

To me it feels like this behavior is broken and it is likely to cause confusion to the user, would you agree? This is where I took a different path to solve the issue before, and ended up swapping different XML structures altogether, keeping track of the "current" one in session state.

2 other solutions I can think of:

  1. Make a similar solution to the one I came up with before by swapping structures according to the user's session. Before, this had to be done by swapping providers, but now the semantics of when a different sitemap structure is used are controlled by the ISiteMapCacheKeyGenerator. Every time a unique Site Map Cache Key is made, a different sitemap structure is stored. I did this specifically to handle the case of multi-tenant sites. The building of the sitemap structure is controlled by an ISiteMapBuilderSet (of which there can be multiple configured by the DI container), so we would build a structure that contains Product1 in categoryA in the first builder, the second builder would put Product1 in categoryB, and so on. The downside of this solution is the amount of effort it takes to maintain, as the number of separate structures you maintain for a single site could grow exponentially. In addition, the user's position would need to be kept track of in a stateful way. Also, the mapping relationship of SiteMapCacheKey to SiteMapBuilderSet would need to be maintained - naturally, there is now an interface where this can be done, too.
  2. Simply allow the product to exist on multiple URLs. I know, you are thinking this is really bad for SEO, and it is until you add a canonical meta tag to tell Google and Bing which URL is the primary one. This solution has the added benefit of being more user friendly - a web-savvy user will know how to modify the URLs /CategoryA/Product1 and /CategoryB/Product1 to end up at CategoryA and CategoryB. I admit I haven't worked out all of the details of how this would work with MVC - I am betting there is a way to coerce the router somehow into resolving 1 URL vs. another that maps to the same area, controller, action, and id. But it is easy to see how the nodes could be uniquely identified in this case, and no user state needs to be tracked. Not to mention, the site map path would work as the user expects. It is simply the most natural way I can think of to handle the many-to-many relationship case.

Anyway, as you can see I am leaning toward the second solution, but that isn't to say the first one can't be worked out as well. Some people probably wouldn't mind resorting to session state and swapping as long as the site had the exact semantics they wanted - that is, each product existing on exactly 1 URL. It is clearly more complicated that way, but as I said, I worked it out once before.

@waynebrantley

Session state is not a realistic option in my opinion.

@NightOwl888
Collaborator

Well, stateful doesn't necessarily mean session state. Using a cache or a static member with a user-specific ID is one option; storing the state in a cookie is another. Of course, in the end they all use cookies to manage the user ID anyway. Realistically, all that is needed is a short string identifier to track the ID of the cached sitemap the user last navigated to, so it would not be very expensive in terms of memory. A cookie might be a good fit in this case.

I am just trying to avoid storing extra information in the URL because that would be the kiss of death for SEO.

Viewstate is not an option because the information we need is lost going from page to page.

There is another option, but it is not very reliable in my experience - that is to use the URL referrer header to determine the last position the user navigated from rather than keeping it in state. Unfortunately, not all browsers reliably send this value and some may have different rules about whether to send it based on whether or not you are navigating within the same domain. It might be worth a shot, though - maybe there is a way to make it more reliable.

In your opinion, is the use of multiple URLs for the same product with a handy tool to manage the canonical tag on the extra URLs a realistic option?

@waynebrantley

Yes, I think that is realistic.

@maartenba
Owner Author

I would make it a small pluggable component. The idea with v3 was to specify this scenario in your action method by fiddling with the sitemap structure directly, although having it stored in a cookie by default seems more logical to me. (Note "by default" - it would be great to be able to plug in one that runs on top of session state / a database / whatever the developer prefers.)

@NightOwl888
Collaborator

Actually, after doing some research, it seems that the best approach would be a combination of the ideas I have been toying with. According to http://forums.asp.net/p/1580897/3985044.aspx, a common approach to the problem is to simply tack on extra position info to the URL. In MVC terms, that would just be an extra parameter in the route.

This would normally be a problem for SEO. However, if there were also a way to manage the canonical tag using a component similar to the TitleHelper, and a logical way to choose the primary one in the sitemap node/XML structure, then this wouldn't be so much of an issue. Essentially, this tool could be used to tell the search engines that the URL with the extra positioning info is equal to the URL without it, and then all the pieces fit; no state is necessary. Then it would not really matter whether you use the same URL with different query string parameters or multiple URLs for the page - this choice can be made by the developer at whim.

One thing is pretty clear: the sitemap is a good place to manage the canonical tag data to make it virtually automatic. There is only 1 way to declare the canonical tag, and this behavior is shared across all web sites.
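
Sketched in MVC terms (the route and parameter names are illustrative), the positioning info just rides along in the route, and the canonical tag collapses the variants for the search engines:

// Sketch: the same product action reachable under any category, with the
// category acting as the extra positioning info.
// In RouteConfig.RegisterRoutes(RouteCollection routes):
routes.MapRoute(
    name: "ProductInCategory",
    url: "Products/{category}/{id}",   // /Products/CategoryA/1 and /Products/CategoryB/1
    defaults: new { controller = "Products", action = "Details" }
);

// Both URLs then render the same canonical tag pointing at the primary URL,
// e.g. <link rel="canonical" href="/Products/1" />, so search engines treat
// the variants as a single page.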

@NightOwl888
Collaborator

@maartenba

I am now several rounds of refactoring in, but I have still not tackled the fact that passing HttpContext around as an API parameter isn't a testable approach.

I see that Microsoft's answer was to create an abstract HttpContextBase with a similar interface, with the concrete type HttpContextWrapper to fill its shoes. However, I also noticed that there are not one, but 2 classes derived from HttpContextWrapper in this project - HttpContext2 and HttpContextMethodOverrider.

It appears as if the HttpContext2 implementation is supposed to cover the general case for MvcSiteMapProvider, as it is used in every place in the project but one - is this a correct assumption?

However, the other implementation (along with its partner, HttpRequestMethodOverrider) is a bit of an enigma. It overrides the HttpMethod member of the request object, but only when it is instantiated with a non-null HttpMethod passed to its constructor (the second parameter). However, in the only place it is used in the project, it is declared like this:

HttpContextBase httpContext = new HttpContextMethodOverrider(context, null);

So, in effect we have a general HttpContextWrapper class that is exactly the same as Microsoft's implementation. I know there must have been some reasoning behind this, but it wasn't left behind in the code. If you can explain this, please do.

Anyway, I am thinking about taking the route of creating an injected abstract factory that can both create a generic HttpContextWrapper and wrap it in one or both of the other layers if necessary (probably with a decorator pattern); a rough sketch follows below. However, both of these classes desperately need new names to clarify their intent. I am thinking about going with either MvcHttpContext or MvcSiteMapHttpContext for HttpContext2 (I am not sure which level it applies to or if it even covers the general case). It isn't clear whether the other one should just be swapped for HttpContextWrapper and scrapped, or if there is some future use for it.
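
Roughly like this (a sketch; the factory names are placeholders):

using System.Web;

// Sketch of the injected abstract factory. A decorator overload could return
// the HttpContext2/HttpContextMethodOverrider layers (under whatever new names
// they end up with) when a node's URL must be resolved against a non-current request.
public interface IHttpContextFactory
{
    HttpContextBase Create();
}

public class HttpContextFactory : IHttpContextFactory
{
    public HttpContextBase Create()
    {
        return new HttpContextWrapper(HttpContext.Current);
    }
}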

One more thing. Typing "new HttpContextWrapper(HttpContext.Current)" or similar every time we need it passed into the SiteMap API is not exactly brief or intuitive. So should the external interface be changed to HttpContextBase, or should there be a method that accepts HttpContext that calls a factory internally to wrap HttpContext and then cascades the call to a protected method that accepts HttpContextBase? I am leaning toward the first approach, as in my book less code is better and the primary user of this will be the UI layer in the same project so the non-intuitive API is not such a big deal.

@maartenba
Owner Author

The HttpContext2 implementation covers one fix for a closed property in HttpContextWrapper so yes, that assumption is correct.

HttpContext2 provides us with the option of having a HttpRequest2 returned for the Request property, which in turn provides us with the possibility to implement our own AppRelativeCurrentExecutionFilePath and CurrentExecutionFilePath properties, required for having UrlBuilder generate the correct URL for a given set of RouteData even when that set of RouteData is not active as the current requested URL.

The *Overrider classes do something similar, but good question - I think they can be merged with the *2 classes.

I think we should be using HttpContextWrapper everywhere but have it instantiated either on demand if some custom properties are required for some things, or via the IoC container which can provide us with new HttpContextWrapper(HttpContext.Current)

Thoughts?

@NightOwl888
Collaborator

I think we should be using HttpContextWrapper everywhere but have it instantiated either on demand 
if some custom properties are required for some things, or via the IoC container which can provide us 
with new HttpContextWrapper(HttpContext.Current)

For a while I have been considering just removing HttpContext from the API altogether and passing it in as a constructor argument only to the classes that directly use it. However, there are some methods where the intent can be made less ambiguous if the parameter is included.

For example, I made a method that is used to create a sitemap cache key that is used to determine whether the current request should access a cached sitemap or store a new one. Without being passed HttpContext, it is difficult to understand how the method should be implemented.

Presently, there are only 3 methods each in the interfaces of ISiteMap and ISiteMapNode where HttpContext is explicitly passed in. Technically speaking, these could all be eliminated. In fact, neither of these classes uses HttpContext directly - it is simply passed along to other services that use it. The question is: would Node.IsAccessibleToUser() be any less clear than Node.IsAccessibleToUser(HttpContextBase)? Or, more properly, is there any realistic reason to pass in a different HttpContext than the current one?

For places where HttpContext.Current is grabbed out of thin air instead of being passed through the API as a parameter, I agree that it should be passed as a constructor argument that is injected by the DI container.

@maartenba
Owner Author

It would not be less clear, both are viable imho

@NightOwl888
Collaborator

// From ISiteMap
ISiteMapNode FindSiteMapNode(string rawUrl);
ISiteMapNode FindSiteMapNode(HttpContext context);
ISiteMapNode FindSiteMapNode(ControllerContext context);
ISiteMapNode FindSiteMapNodeFromKey(string key);
bool IsAccessibleToUser(HttpContext context, ISiteMapNode node);

One thing is for sure: FindSiteMapNode() with no parameter is ambiguous. I suppose in the spirit of things it could be renamed FindSiteMapNodeFromCurrentContext() to make it clear. Thoughts?

// From ISiteMapNode

RouteData GetRouteData(HttpContextBase httpContext);
bool IsVisible(HttpContext context, IDictionary<string, object> sourceMetadata);
bool IsAccessibleToUser(HttpContext context);

GetRouteData() also seems like it might not make sense without context being passed in, short of being renamed GetRouteDataFromCurrentContext(). I also discovered that this method uses the context parameter directly internally. It matches the signature of the GetRouteData() method from RouteTable.Routes. We should probably leave that one alone, as it already uses HttpContextBase.

However, it looks like the other methods still make perfect sense without the extra fluff. On the other hand, I have this sinking feeling that Microsoft had some master plan that I am not aware of and there was some better reason for those context parameters being passed than you or I can come up with in a few minutes.

@maartenba
Owner Author

Their main idea was to move from System.Web to System.Web.Abstractions to make things testable. No idea about all the passing around, though :)

@NightOwl888
Collaborator

The only thing I can come up with is continuity. Without the parameter, there would be no way for the presentation or UI layers to choose the specific context to pass along to the service that uses it. Using DI, a strategy pattern or similar could be used to choose a configured context object, but it would still require a parameter to choose a unique instance.

That said, there are a few methods and properties (from Microsoft) that simply grab HttpContext.Current out of thin air and don't give the UI layer a choice. Specifically, they all call IsAccessibleToUser(). I think that one (or 2, rather) is safe to change.

IsVisible cascades the call to the visibility provider. That one is safe to change, I think - assuming nobody would ever want to pass anything other than HttpContext.Current.

The only one up in the air is FindSiteMapNode(HttpContext). It just doesn't feel right calling a Find() method without passing any criteria. I think renaming it might make it OK, as long as the name makes clear where the criteria come from. I guess if anyone screamed that they need a way to pass a different context, an additional method could be added to the interface later.

Naming Conventions

Speaking of names, I was also thinking that it might be a better choice in the DI world to name the "Default" concrete class according to the conventions that many DI containers use. For example, with StructureMap nothing extra needs to be configured if you have an interface named ISomething and a class named Something - the container already knows implicitly which one is the default.

Using a naming convention where all of the default concrete classes are prefixed with "Default" is supported, but requires extra configuration. Naming the default class after the interface is supported out of the box, and from what I have read, some other DI containers do this as well.
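
For example, with StructureMap's scanning conventions (this is real StructureMap syntax; ISiteMap/SiteMap stand in for any interface/default pair):

using StructureMap;

// StructureMap maps ISomething to Something with no per-type configuration:
var container = new Container(x => x.Scan(scan =>
{
    scan.AssemblyContainingType<ISiteMap>();  // scan the assembly holding the defaults
    scan.WithDefaultConventions();            // ISiteMap -> SiteMap, ISiteMapNode -> SiteMapNode, ...
}));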

Thoughts?

@NightOwl888
Collaborator

Hmm... NuGet.org was up when I checked a few minutes before your post, and it appears to be up now. Are you referring to the API that you use to push the packages?

But, just like 2 weeks ago, it has been several minutes since I pushed to NuGet, the packages are not there, and I didn't get any email indicating that there were failures. Just - nothing.

It seemed like MyGet was having outages 2 weeks ago so I didn't make an issue of it, but now it looks like a pattern.

@NightOwl888
Collaborator

Oops, I spoke too soon. One of the packages just showed up on NuGet.

@NightOwl888
Collaborator

Anyway, I would hardly call displaying failed status on the UI for a few minutes before displaying success on the same build "failing silently". Now that I know it can do this I can probably work with it, but it caught me off guard because I didn't expect the status to change from failed to success a couple of minutes later.

@maartenba
Owner Author

Check status.nuget.org - they had a major DNS outage in the past hour.

@NightOwl888
Collaborator

@maartenba

It looks like Microsoft made an out-of-band security patch that causes inconsistent build behavior between machines that have it installed and those that don't. In particular, they updated both MVC 3 and 4, from 3.0.0.0 > 3.0.0.1 and 4.0.0.0 > 4.0.0.1. The MVC 4 update worked after some tweaking, but I am unable to get the MVC 3 build to work both on my local environment and on the MyGet server. Details are in my post and the included links.

I was trying to get a patch out, but it seems that the only options to do so are

  1. Revert the configuration back to where it was before and remove these updates from my development machines.
  2. Have you upgrade the MyGet server so it will build there, and then instruct anyone who wants to build MvcSiteMapProvider that they need this update installed in order to build.

Since this is a required Windows update that fixes a critical issue, going forward seems like it would be the better choice. But then the MyGet server may have other broken projects as well.

Thoughts?

@maartenba
Owner Author

Isn't that just a NuGet dependency update for MVC 3 and 4? For MVC 2 we can install it.

@NightOwl888
Collaborator

Unfortunately not. For whatever reason, the NuGet package won't work on a computer that doesn't have the update installed. We have always used the local MVC 2 and 3 references and NuGet packages for MVC 4 and 5. I don't think that referencing the compiled library will be a problem with the new version (NuGet will install the assembly redirects into web.config as appropriate), but building the library is (at least for MVC 3).

After taking some time to soak this in, I suppose that some check could be added to the MSBuild project file (or the build script) to dynamically switch between MVC 3.0.0.0 in the GAC if it exists and 3.0.0.1 from NuGet if it doesn't. The main issue is that the security update completely removes 3.0.0.0 to eliminate any possibility that someone can still use it (and to force them to apply the security patch), but apparently it makes some other change to the system (perhaps an assembly redirect policy) to enable 3.0.0.1 to build.

@NightOwl888
Collaborator

Ok, I did a little experimenting, and it turns out that using a condition gets the build working both on machines that have the update and on those that don't.

<!-- Due to the windows update MS14-059, we need this hack to ensure we can build MVC3 both on machines that have the update and those that don't -->
<Reference Condition=" Exists('$(windir)\Microsoft.NET\assembly\GAC_MSIL\System.Web.Mvc\v4.0_3.0.0.0__31bf3856ad364e35\System.Web.Mvc.dll') " Include="System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL" />
<Reference Condition=" !Exists('$(windir)\Microsoft.NET\assembly\GAC_MSIL\System.Web.Mvc\v4.0_3.0.0.0__31bf3856ad364e35\System.Web.Mvc.dll') " Include="System.Web.Mvc, Version=3.0.0.1, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL">
  <Private>True</Private>
  <HintPath>..\packages\Microsoft.AspNet.Mvc.3.0.20105.1\lib\net40\System.Web.Mvc.dll</HintPath>
</Reference>

But the clincher is that if the build is done with the update, no machine without the update can use it. If the build is done on a machine without the update, then it can be used whether the update is installed or not, and the update will automatically migrate it to the patched version via assembly redirects.

So the bottom line is, we can get away with this hack until the MyGet server is upgraded. At that point, the upgrade will be required for anyone who uses MvcSiteMapProvider.MVC3.Core. Also, any developer who does their own build will need to follow the same criteria - if they build on a machine with the update, then all applications that use that build require the update. This only applies to users of MVC 3; for all other versions this upgrade is seamless.

@NightOwl888
Collaborator

BTW - Although the build clearly failed, MyGet showed that it succeeded with 2 packages. I am not sure if this is because the build script is returning a non-failure status or because MyGet isn't picking up a failure.

@maartenba
Owner Author

Which branch/commits have you been doing this on? Will see if I can come up with something.

@NightOwl888
Collaborator

I have already pushed it through to release. I am not certain if you mean the problem with MyGet reporting a success or the fact that it seems that this MVC 3 hack is required in order to build consistently. I have no problem with backing out this hack if there is a solution that works better. But since the MyGet build server has not had the Windows Update applied yet (I think), there shouldn't be any difference between the current version and the previous one in terms of assembly bindings.

Actually, with this hack in place it won't be possible for us to tell by the behavior of the build when the server is patched. It would be nice to inform anyone using MVC 3 that an MVC upgrade is required when the time comes that the build server is upgraded.

The build failure happened on commit 2087df5a8deb731aa30c49128e810b38b9f20977, before I applied the above solution in commit 498feab5cf3c0c45cca4423e65beb83c5281addb, if you want to take a crack at it.

I have been testing locally on 2 machines - one with the Windows Update applied and one without.

@maartenba
Owner Author

Thanks Shad. On the build success/failure: MyGet currently only fails when no packages are produced. There's an option underway to decide when a build should fail.

@NightOwl888
Collaborator

@maartenba

I have recently created a port of the BoboBrowse (Java) faceted search engine in .NET. It is now in beta, available on NuGet, all unit tests are passing, and could use some more testing to ensure it is stable.

Note that there is a port of BoboBrowse 2.x on CodePlex, but I couldn't get the owner of the repo to cooperate in order to publish a NuGet package, so I had to host it myself. That version is also not compatible with the latest version of Lucene.Net (3.0.3), and not documented, but this one is. It is a port of BoboBrowse 3.1.0, which has several more features, including Geo Facet Search, Time Range Search, and Histogram Facet Search.

You can see it in action by running the CarDemo solution from the repo. It's not a great demo (also a port from the Java version), but it does the job.

That project falls into the same sort of category as MvcSiteMapProvider, since faceted search could be used for site navigation. I would appreciate it if you could write a short blog post and/or a tweet about it to see if we can get some beta testers on board.

@maartenba
Owner Author

Consider it done - scheduled two tweets for today. Does it support plugging in the storage engine?

@NightOwl888
Collaborator

No, it is strictly a query engine. There are no special tools to build a search index - you just use Lucene.Net to do that.

The only thing to keep in mind is that NumericField is not supported. BoboBrowse requires that numeric and date data be stored in the index as formatted strings. I created a document to explain this because it seems many people were attempting to use NumericField and it didn't work, and the original documentation was not very clear on that point. I even built a small project in Java to test using the NumericField, and it just crashes. Changing it to a formatted string padded with leading zeros works like a charm. I improved the error message to indicate that NumericField is not supported, since the original one didn't explain the problem.

It plugs into Lucene.Net by decorating an open IndexReader with a BoboIndexReader. Generally, the BoboIndexReader was set up to be run as a singleton. If you add the BoboBrowse.Net.Spring package, you can use an XML file to configure browse facet handlers for each index, but in a nutshell, facet handlers are configured when the BoboIndexReader is created. There are runtime facet handlers in version 3.1.0 to get around that limitation when the facets contain information that is calculated on the fly.

On an open BoboIndexReader, you can then call Browse, passing in a BrowseRequest which can contain a lot of configuration about how to organize the results, the main parts being a set of BrowseSelections and a Lucene.Net Query.
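
In code, a typical query looks roughly like this (a sketch of the BoboBrowse.Net 3.x surface from memory; check the repo docs for the exact signatures):

// Rough sketch: decorate an open IndexReader, select a facet value, and browse.
// Assumes indexReader is an open Lucene.Net IndexReader and query is any Lucene.Net Query.
var facetHandlers = new List<IFacetHandler> { new SimpleFacetHandler("category") };
var boboReader = BoboIndexReader.GetInstance(indexReader, facetHandlers); // typically kept as a singleton

var request = new BrowseRequest { Count = 10, Offset = 0, Query = query };
var selection = new BrowseSelection("category");
selection.AddValue("laptops");
request.AddSelection(selection);

var browser = new BoboBrowser(boboReader);
var result = browser.Browse(request);
// result.Hits holds the matching documents; result.FacetMap holds the facet counts.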

There is a lot of code, and most of it deals with sorting, grouping, and manipulating data types. In fact, there is about double the code there is in MvcSiteMapProvider. I have a sneaking suspicion that this entire library could be replaced by a couple of hundred lines of LINQ and proper use of generics, but it is difficult to figure out how to separate the facet business logic from the sorting and grouping code, since both of them are using arrays and bit shifting. Working out how to do that and have it work seamlessly with Lucene.Net's clunky API would be much more work than simply porting something that is already working, even if it does mean bringing along a lot of extra features that are difficult to understand, not well documented, and may not even work properly in .NET. Fortunately, I have managed to get all of the tests to pass, which raises my confidence that at least the main part of the business logic works in .NET.

@maartenba
Owner Author

Very nice! Will see if I can have a go with it.

@NightOwl888
Collaborator

@maartenba

I am working on build scripts for 2 jQuery plugins, and one of them (the main one) has multiple packages. I want to deploy them with NuGet, Bower, and NPM.

After reading the docs on Bower, it seems that they only support 1 module per Git repo and use the tags to keep track of versioning. So, I opted to go with using submodules for each package using this approach. This solved another problem - how to keep the minified and regular scripts in sync.

I am planning to use MyGet to run the build. NuGet is working great. Bower (supposedly) will work off of the tags so there is no need to do a push. NPM on the other hand requires a push, and this is where I am getting tripped up.

Is it possible to configure the script to push all of the packages from the Git submodules to the current feed rather than using a feed per submodule, allowing a single "push upstream" command for all of the NPM packages, the same way we do it for multiple NuGet packages? If so, would the following command do that?

cd submodule\directory && npm push --registry https://www.myget.org/F/your-feed-name/npm

I found the documentation that shows that you can allow multiple users to push to NPM, but I am not sure if that is relevant here or if that is possible to do on a MyGet server without making passwords public.

@NightOwl888
Collaborator

Never mind. I found the npm pack command as specified in your documentation. It wasn't very clear that this step was necessary for NPM. Once the pack is done, MyGet publishes to the feed just fine.

@NightOwl888
Collaborator

On a side note, trying to update submodules on MyGet fails when using SSH. I switched to HTTPS and it works, but it is a bit awkward to work with the submodules locally because of the different authentication scheme.

@maartenba
Owner Author

Yeah only HTTP(S) is supported. We were not keen on asking users to hand us their key files to be able to work on the build machine.

@NightOwl888
Collaborator

I see. Still, it would be nice if there were a way to set it up as SSH for local use, but have MyGet switch to HTTPS through some sort of configuration (either using a file or through the control panel).

@NightOwl888
Collaborator

@maartenba

I am attempting to push my first NPM package from MyGet to NPM's official registry. However, it doesn't seem to be working.

When I attempt to push, it says "some additional configuration required". It mentions an API key, but nowhere in the NPM documentation does it say anything about that.

Then I follow the link to "Package Sources" and try to enter my login credentials on the edit form for the "Npmjs.org" source, but it won't let me save. It says "the package source specified is invalid", even though I left the URL at the default of http://registry.npmjs.org/ (which I double-checked against the NPM documentation). The form validation doesn't let me continue.

This is supposed to allow you to push upstream the same way you can with NuGet, isn't it? The documentation is a bit vague on that point. It seems to be focused on pushing a package onto the feed, but it isn't clear how to:

  1. Install a package locally from the MyGet feed (to test it)
  2. Push a package to the official Npmjs.org feed

which are the only tasks I am interested in.

If you want to take a look, the feed I am working with is here: https://www.myget.org/feed/Packages/jquery-facebox

@maartenba
Owner Author

Added myself to the feed. Seems the NPM package source is missing a username/password.

@NightOwl888
Collaborator

Right. And how can I add them if it is not possible to save the NPM package source form?

@maartenba
Owner Author

Does it show an error message next to the username or pwd box on save? (ping me via email)

@NightOwl888
Collaborator

No, it says "the package source specified is invalid" next to the NPM URL (even though I didn't change it from the default). I presume the validation code has not been updated to allow this URL: http://registry.npmjs.org/.

@maartenba
Owner Author

Can you email me?

@nrodriguez2001

After installing SiteMap v5 and deploying to IIS, we are getting an error like the one below:

Compilation Error
Description: An error occurred during the compilation of a resource required to service this request. Please review the following specific error details and modify your source code appropriately.

Compiler Error Message: CS0234: The type or namespace name 'VisualStudio' does not exist in the namespace 'Microsoft' (are you missing an assembly reference?)

Source Error:

Line 4: @using System.Web.UI.WebControls
Line 5: @using Microsoft.AspNet.Identity
Line 6: @using Microsoft.VisualStudio.Web.PageInspector.Runtime
Line 7:

Any ideas to solve this issue?

@nrodriguez2001

I got this error

System.StackOverflowException was unhandled
Message: An unhandled exception of type 'System.StackOverflowException' occurred in mscorlib.dll

running the MVC 5 version

@DilHem

DilHem commented Mar 18, 2019

Hey All,
It's urgent.

Can anyone provide me with C language code for a fuzzy logic speed controller? I'm doing this using the ENERGIA interface.
I'm taking my inputs from an ultrasonic sensor; after the fuzzy methods, the outputs are taken by the DC motors as their inputs, and they then control the rotation speed of the DC motor.

I'd be thankful if anyone can provide the C language code for this.

Thank You
