MvcSitemapProvider v4 #119
Not sure what it will get us by dropping those dependencies, but hey, if you can do it - that would be great. Features: I think more than features, there are many bugs/issues:
- Memory leak issues - #118
- My latest code changes you just accepted fix this, so it should be closed: #102
- Many 'issues' open that should be closed - but I think you are the only one with rights to get these cleaned up.
In general, just more activity and reviving of this great project! :-) |
Here is my wish list:
That said, I am not willing to wait until you get around to it to make a version I can use. I'd be happy to contribute what I come up with back into this project if there are no objections. I was actually looking into creating another sitemap provider to implement the functionality so I can reuse the UI and sitemaps XML functionality that is already here, but based on your comments about dropping the dependency on System.Web, perhaps it would be best to start off with some kind of SiteMap Loader interface to handle the starting point where the provider would normally go (a rough sketch of that idea follows below)? I am open to suggestions. Of course, I could just make a standard SiteMap provider that is just a shell that delegates its work to other injected classes, and then later the SiteMap provider can be dropped without having a huge impact on the rest of the project. |
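For illustration, that loader seam might look something like this. This is only a sketch; the names ISiteMapLoader and ISiteMap are placeholders, not final API:

using System.Web;

// Hypothetical stand-in for the sitemap tree type.
public interface ISiteMap { }

// Hypothetical seam replacing the provider as the entry point:
// the host application supplies (or swaps) the implementation via DI.
public interface ISiteMapLoader
{
    // Returns the sitemap tree appropriate for the current request.
    ISiteMap GetSiteMap(HttpContextBase httpContext);
}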
I’m working on some initial bits for vNext; if you want, I’ll post them on a separate branch. The idea there is to:
|
Sure, I'd like to see the latest and greatest. Thanks. |
See branch "v4". Do note that this is very much work in progress and that no final product (nor working application) is in there. It does give you some pointers of where I'm heading. |
I took a look and had a few thoughts/questions:

My hope is to introduce a couple of seams into the provider: 1) to allow the tree (or a tree provider) to be injected by the host application, and 2) to allow the configuration options to be injected by the host application. Naturally, it makes sense for them to both default to loading from the Mvc.sitemap file and the web.config file, respectively. It also looks like it makes sense to have a dynamic node provider (seam) that defaults to an assembly scanner. It looks like you created an interface named ISiteMapSource that would work great for the tree injection, but there is no way to inject a custom provider other than through the config file.

I realize this is just prototyping, but what was the reasoning behind using the ASP.NET sitemap provider as a base? That is what I had in mind as well until you mentioned you might do otherwise. Pros: Cons: Personally, I have no preference if the ASP.NET provider stays or goes, but clearly the project needs to go in a different direction if the provider is not used.

Finally, how much time do you have to dedicate to this project? The reason I ask is I could either start working with this full time now or circle back around after about 3 weeks of working on other bits of my app and join in when you have more of a base to work with. Of course, if this project doesn't change much in that timeframe it doesn't make much sense for me to wait. |
I was thinking of building the provider in 3 stages.
That last stage is of concern to me: drop it or not? I would like to drop it but this breaks anything out there that uses sitemaps as the data source. I'll be traveling the next weeks so time will be limited, feel free to do some prototyping already. Earlier feedback is better imho :-) |
Agreed. Dropping the provider trashes any chance for a smooth upgrade. But then, it could also make the library easier to work with for newcomers. Looks like we are pretty much on the same page.
Just so you understand my end game, I am planning to make an end-user drag and drop tree interface that can be used to create/rearrange the site structure. There is a fully-functional sample of exactly what I would like to achieve here: http://www.bigace.de/demo.html. Have a look at the v3 administration interface and go to Pages > Administration from the menu. The pane on the left allows for creating and moving of pages. All changes are reflected in the main menu of the site and the site map path. Note the following: The tree isn't simply translated, there is a separate tree structure for each language. Anyway, I will get started working on a prototype. For now, I will assume the provider will stay, but my provider implementation will just be a shell that can be removed without too much trouble. I will try to carry forward your ideas, but my main focus will be in creating an implementation that can be easily integrated with a custom business layer and by extension, a custom data source. |
On 1.2: that's just a step which allows you to plug in between tree creation and tree usage. I was thinking of allowing things like node optimization (e.g. for some nodes the URL can be made static for all requests instead of having to resolve it all the time). It also allows you to restructure some things, or replace the entire tree with another one. Look at it as the last chance to do anything with the tree that applies to all requests from then on. But yes, it seems we're on the same page. Looking forward to some prototypes! On a side note: I'm planning on keeping the current SiteMapXmlSiteMapSource to have at least some backward compatibility. Other than that, feel free to cut the cords with the ASP.NET provider model. |
I do not see any real problem with dropping ASP.NET provider model support. The only things that used it were webforms based anyway. |
I am pursuing the path of dropping support for the ASP.NET provider. This turned out to be a pretty deep rabbit hole of rooting out necessary bits from the .NET framework, but it is clear that to gain control over the lifetime of the in-memory sitemap structure it is easier if it is not part of the provider. The next step is to try to make what I have function the same way it did before by delegating calls from an actual provider. However, although I have a fairly good idea what the dynamic nodes and assembly based (reflection) nodes are for at this point, there doesn't seem to be any samples for the reflection based nodes in the project and the samples provided for dynamic nodes are very basic. Do you have a working demo project you can provide so I can see how this works? I still have a fair amount of refactoring to do after getting the code functional, but once I have working code, keeping it in a working state should be much easier. |
Nothing I can share. The reflection based nodes can be created using one of the [MvcSiteMapNode] attributes; the important thing is to specify their ParentKey property to build the tree structure. |
Yesterday, I finally got this to a point where it is functioning without using the ASP.NET provider. I ended up dropping the ASP.NET provider altogether because it was just getting in the way of seeing the big picture. A compatibility layer (builder) can still be added to load the data from an alternate source (such as an ASP.NET sitemaps provider) if needed. If you are interested in having a look, the latest can be found here: https://github.com/NightOwl888/MvcSiteMapProvider/tree/v4, though it isn't very pretty at this point, is poorly documented, and the localization code isn't yet fully implemented so nothing shows up for localized nodes.

My vision is to have an object graph of smart business objects whose purpose is to maintain the hierarchical relationships between nodes. The outside world should have no direct access to this data - it should only have access to methods and properties that all go through the business logic layer within (or injected into) the object graph. If you are familiar with the CSLA framework, this is sort of what I have in mind for managing the behavior associated with the tree structure itself (without the dependency on CSLA, that is). Here is an overview of these objects:

SiteMap - Replaces SiteMapProvider and is essentially the root object of the graph.

As for the rest of the code, I envision a set of services whose default implementation can easily be swapped for a custom one by simply creating an implementation and then changing the DI configuration to inject it. This will allow for a wide variety of use cases. As it turns out, in a multi-tenant application, deciding when a request should belong to one tenant vs another is an application-specific thing. Therefore, the logic of when a new sitemap should be built and cached needs to be easily overridable by the application. I made a SiteMapLoader to handle this task, but as it stands it is still broken because it allows the UI to override the "current" sitemap in some cases but not others. It also doesn't have a very robust way to map a specific SiteMapBuilderSet named configuration to a given request. This is something I will address shortly.

Semantics: Now some decisions need to be made to shape the direction from here. Some things jumped out at me as being unusual, but it is difficult to tell exactly what the underlying reasons for their being are. Other things just need to be put out there for discussion. |
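To make that shape concrete, here is a minimal sketch of the root object; the member list is illustrative only, not the final interface:

// Hypothetical sketch of the SiteMap root object described above.
public interface ISiteMapNode
{
    string Key { get; }
    string Url { get; }
}

public interface ISiteMap
{
    // All access to the hierarchy goes through the graph's business logic;
    // the raw node collection is never exposed directly.
    ISiteMapNode RootNode { get; }
    ISiteMapNode FindSiteMapNodeFromKey(string key);
    void AddNode(ISiteMapNode node, ISiteMapNode parentNode);
}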
As for referencing IDependencyResolver, I just didn't see any need. Using constructor injection, all dependencies can be provided by the DI container directly, and doing so gives the DI container control over the lifetime of the objects. This has a distinct advantage here - if we have a project with 10,000 nodes and we want to inject a UrlResolver service into those nodes that contains custom logic, nothing is gained (but a lot of memory and/or CPU cycles lost) by creating 10,000 UrlResolver service instances using IDependencyResolver. On the other hand, using the DI container to supply these dependencies can ensure that only 1 instance is ever created regardless of how many nodes use it.

To support cases where no DI container is used, I plan on making a basic object composer that contains the logical defaults. It would read configuration values from specific AppSettings keys and use them to configure similar options to what are available now. It would just require a single line of code to be placed in the Application_Start method, such as Composer.Compose(). That said, I don't plan to spend that much time on this because it is not something I will need. A lot could be done here - a custom configuration section for MvcSiteMapProvider, configuration for custom implementations of interfaces put into the config file, etc. However, given that this class would be totally replaced if a DI container were in use, you may want to encourage developers to use DI if they have a pressing need to override an implementation with another. Alternatively, this could be built out into an effective way to configure MvcSiteMapProvider without using a DI container.

Ok, that turned out to be very long-winded. The sooner I get feedback on these points, the more likely the feedback is to influence the direction I go. Thanks. |
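To illustrate the lifetime point, a sketch of what that composer might look like (StructureMap syntax; IUrlResolver and CustomUrlResolver are placeholder names, not types from the project):

using StructureMap;

// Placeholder service for illustration.
public interface IUrlResolver
{
    string ResolveUrl(string nodeKey);
}

public class CustomUrlResolver : IUrlResolver
{
    public string ResolveUrl(string nodeKey) { return "/" + nodeKey; }
}

public static class Composer
{
    // One-line bootstrap for hosts that don't use their own DI container.
    public static IContainer Compose()
    {
        return new Container(x =>
        {
            // A single shared resolver instance is injected into every node,
            // rather than one instance per node via IDependencyResolver.
            x.For<IUrlResolver>().Singleton().Use<CustomUrlResolver>();
        });
    }
}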
One other issue I have... is that many times I can have the same sitemap node in several different trees. |
Is this different from the "Multiple paths to the same node" item I have in my wish list above? This sounds like a similar scenario; at least the way you describe it as "same sitemap node in many trees" sounds like the same solution I came up with before - creating multiple XML files and using session state to track the current one, overriding it to the "default" if anything but a certain set of pages were requested next chronologically. In short, my solution was to fix an issue with a many-to-many relationship between categories and products, where one product could belong to many different categories.

I plan to address this, but not until after a few more rounds of refactoring. I haven't thought of a way to do it without using some kind of user session based tracking, though. Well, there is using the URL querystring to track the path, but that is not SEO friendly at all and is not a path I plan to take. Let me know if you have any other ideas as to how this could be implemented. The main issue at hand is how to tell which path you are at in the map if a node is requested that exists in 2 different places?

I guess another possible solution to this would be to actually put the same exact product on multiple unique URLs. This would force the burden of properly using the canonical meta tag onto the developer, but since this is something that is supported by both Google and Bing already, this could potentially be a solution. |
Wow - the light bulb just came on. I never really considered making the canonical meta tag generation a part of the sitemap, but now that I think about it, it makes perfect sense to make a "CanonicalHelper" in the same way the "TitleHelper" works. |
@NightOwl888 - yes, that is what is needed... a many-to-many relationship would be great. When a node exists in two places and you need to 'find' where you are in the sitemap, I was thinking the parents would have a priority - so the FIRST item in the many-to-many relationship would be the one it appears under. So for your example, if a product was in categoryA and categoryB and you went to the product page, it would show categoryA as the parent - if categoryA was the 'first' link in the many-to-many relationship. This could, of course, be an injected piece of code where users could decide some other logic based on their specific requirements. |
@waynebrantley - let's think about a side effect of that for a minute. If I were a customer browsing your site and first navigated to categoryB, the sitemap path would be something like "Home > CategoryB". Now I choose the product (let's just call it Product1) from the categoryB page. When the product page is displayed, it would show the sitemap path as "Home > CategoryA > Product1". It just jumped from categoryB to categoryA for no apparent reason. To me it feels like this behavior is broken and it is likely to cause confusion for the user, would you agree? This is where I took a different path to solve the issue before, and ended up swapping different XML structures altogether, keeping track of the "current" one in session state. 2 other solutions I can think of:
Anyway, as you can see I am leaning toward the second solution, but that isn't to say the first one can't be worked out as well. Some people probably wouldn't mind resorting to session state and swapping as long as the site had the exact semantics they wanted - that is, each product existing on exactly 1 URL. It is clearly more complicated that way, but as I said, I worked it out once before. |
Session state is not a realistic option in my opinion. |
Well, stateful doesn't necessarily mean session state. Using a cache or a static member with a user-specific ID is one option or storing the state in a cookie is another. Of course, in the end they all use cookies to manage the user ID anyway. Realistically all that is needed is a short string identifier to track the ID of the cached sitemap the user last navigated to, so it would not be very expensive in terms of memory. A cookie might be a good fit in this case. I am just trying to avoid storing extra information in the URL because that would be the kiss of death for SEO. Viewstate is not an option because the information we need is lost going from page to page. There is another option, but it is not very reliable in my experience - that is to use the URL referrer header to determine the last position the user navigated from rather than keeping it in state. Unfortunately, not all browsers reliably send this value and some may have different rules about whether to send it based on whether or not you are navigating within the same domain. It might be worth a shot, though - maybe there is a way to make it more reliable. In your opinion, is the use of multiple URLs for the same product with a handy tool to manage the canonical tag on the extra URLs a realistic option? |
Yes, I think that is realistic. |
I would make it a small pluggable component. The idea with v3 was to specify this scenario in your action method by fiddling with the sitemap structure directly, although having it stored in a cookie by default seems more logical to me. (Note "by default" - it would be great to be able to plug in one that runs on top of session state / database / whatever the developer prefers, something like the sketch below.) |
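Something along these lines, perhaps. This is a sketch only; ISiteMapPathStore and the cookie name are made up for illustration:

using System.Web;

// Pluggable store for the user's current position in the map;
// cookie-backed by default, swappable for session/database/whatever.
public interface ISiteMapPathStore
{
    string GetCurrentPathKey(HttpContextBase context);
    void SetCurrentPathKey(HttpContextBase context, string pathKey);
}

public class CookieSiteMapPathStore : ISiteMapPathStore
{
    private const string CookieName = "MvcSiteMap.PathKey";

    public string GetCurrentPathKey(HttpContextBase context)
    {
        // Short string identifier tracking the cached path the user last navigated to.
        var cookie = context.Request.Cookies[CookieName];
        return cookie == null ? null : cookie.Value;
    }

    public void SetCurrentPathKey(HttpContextBase context, string pathKey)
    {
        context.Response.Cookies.Add(new HttpCookie(CookieName, pathKey));
    }
}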
Actually, after doing some research, it seems that the best approach would be a combination of the ideas I have been toying with. According to http://forums.asp.net/p/1580897/3985044.aspx, a common approach to the problem is to simply tack on extra position info to the URL. In MVC terms, that would just be an extra parameter in the route. This would normally be a problem for SEO. However, if there were also a way to manage the canonical tag using a similar component to the TitleHelper and a logical way to choose the primary one in the sitemap node/XML structure, then this wouldn't be so much of an issue. Essentially, this tool could be used to tell the search engines that the URL with the extra positioning info is equal to the URL without it, and then all the pieces fit, no state is necessary. Then it would not really matter if you want to use the same URL with different query string parameters or multiple URLs for the page - this choice can be made by the developer at whim. One thing is pretty clear, the sitemap is a good place to manage the canonical tag data to make it virtually automatic. There is only 1 way to declare the canonical tag and this behavior is shared across all web sites. |
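As a sketch of what that could look like - a hypothetical helper modeled on how TitleHelper is used, not the eventual implementation:

using System.Web;
using System.Web.Mvc;

public static class CanonicalHelper
{
    // Emits <link rel="canonical" href="..."/> pointing at the node's primary
    // URL, so URLs carrying extra positioning info or query string parameters
    // collapse to a single canonical one for the search engines.
    public static IHtmlString CanonicalTag(this HtmlHelper helper, string canonicalUrl)
    {
        var tag = new TagBuilder("link");
        tag.Attributes["rel"] = "canonical";
        tag.Attributes["href"] = canonicalUrl;
        return new HtmlString(tag.ToString(TagRenderMode.SelfClosing));
    }
}

From the layout it would be called with something like @Html.CanonicalTag(url), with the URL supplied by the sitemap node.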
I am now several rounds of refactoring in, but have still not tackled the fact that HttpContext being passed around as API parameters isn't a testable approach. I see that Microsoft's answer was to create an abstract HttpContextBase with a similar interface, with the concrete type HttpContextWrapper to fill its shoes. However, I also noticed that there are not one, but 2 overridden HttpContextWrapper classes in this project - HttpContext2 and HttpContextMethodOverrider. It appears as if the HttpContext2 implementation is supposed to cover the general case for MvcSiteMapProvider, as it is used in every place in the project but one - is this a correct assumption?

However, the other implementation (along with its partner, HttpRequestMethodOverrider) is a bit of an enigma. It overrides the HttpMethod member of the request object, but only in the case where it is instantiated with an HttpMethod passed to its constructor (second parameter) that is not null. However, the only place it is used in the project, it is declared like this: HttpContextBase httpContext = new HttpContextMethodOverrider(context, null); So, in effect we have a general HttpContextWrapper class that is exactly the same as Microsoft's implementation. I know there must have been some reasoning behind this, but it wasn't left behind in the code. If you can explain this, please do.

Anyway, I am thinking about taking the route of creating an injected abstract factory that can both create a generic HttpContextWrapper and wrap it in one or both of the other layers if necessary (probably with a decorator pattern). However, both of these classes desperately need new names to clarify their intent. I am thinking about going with either MvcHttpContext or MvcSiteMapHttpContext for HttpContext2 (I am not sure which level it applies to or if it even covers the general case). It isn't clear whether the other one should just be swapped for HttpContextWrapper and scrapped or if there is some future use for it.

One more thing. Typing "new HttpContextWrapper(HttpContext.Current)" or similar every time we need it passed into the SiteMap API is not exactly brief or intuitive. So should the external interface be changed to HttpContextBase, or should there be a method that accepts HttpContext that calls a factory internally to wrap HttpContext and then cascades the call to a protected method that accepts HttpContextBase? I am leaning toward the first approach, as in my book less code is better, and the primary user of this will be the UI layer in the same project, so the non-intuitive API is not such a big deal. |
The HttpContext2 implementation covers one fix for a closed property in HttpContextWrapper, so yes, that assumption is correct. HttpContext2 provides us with the option of having an HttpRequest2 returned for the Request property, which in turn provides us with the possibility to implement our own AppRelativeCurrentExecutionFilePath and CurrentExecutionFilePath properties, required for having UrlBuilder generate the correct URL for a given set of RouteData even when that set of RouteData is not active as the current requested URL. The *Overrider classes do something similar, but good question - I think they can be merged with the *2 classes. I think we should be using HttpContextWrapper everywhere but have it instantiated either on demand if some custom properties are required for some things, or via the IoC container which can provide us with new HttpContextWrapper(HttpContext.Current). Thoughts? |
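The IoC option could be expressed along these lines (a StructureMap-style sketch, not code from the project):

using System.Web;
using StructureMap;

public class WebRegistry : Registry
{
    public WebRegistry()
    {
        // Hand out a fresh wrapper around the current context on demand,
        // so consumers depend only on the testable HttpContextBase.
        For<HttpContextBase>()
            .Use(() => new HttpContextWrapper(HttpContext.Current));
    }
}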
For a while I have been considering just removing HttpContext from the API altogether and passing it in as a constructor argument only to the classes that directly use it. However, there are some methods where the intent can be made less ambiguous if the parameter is included. For example, I made a method that is used to create a sitemap cache key that is used to determine whether the current request should access a cached sitemap or store a new one. Without being passed HttpContext, it is difficult to understand how the method should be implemented. Presently, there are only 3 methods each in the interfaces of ISiteMap and ISiteMapNode where HttpContext is explicitly passed in. Technically speaking, these could all be eliminated. In fact, neither of these classes use HttpContext directly - it is simply passed along to other services that use it. The question is, would Node.IsAccessibleToUser() be any less clear than Node.IsAccessibleToUser(HttpContextBase)? Or more properly, is there any realistic reason to pass in a different HttpContext than the current one? For places where HttpContext.Current is grabbed out of thin air instead of being passed through the API as a parameter, I agree that it should be passed as a constructor argument that is injected by the DI container. |
It would not be less clear, both are viable imho |
// From ISiteMap
ISiteMapNode FindSiteMapNode(string rawUrl);
ISiteMapNode FindSiteMapNode(HttpContext context);
ISiteMapNode FindSiteMapNode(ControllerContext context);
ISiteMapNode FindSiteMapNodeFromKey(string key);
bool IsAccessibleToUser(HttpContext context, ISiteMapNode node);

One thing is for sure, FindSiteMapNode() with no parameter is ambiguous. I suppose in the spirit of things it could be renamed FindSiteMapNodeFromCurrentContext() to make it clear. Thoughts?

// From ISiteMapNode
RouteData GetRouteData(HttpContextBase httpContext);
bool IsVisible(HttpContext context, IDictionary<string, object> sourceMetadata);
bool IsAccessibleToUser(HttpContext context);

GetRouteData() also seems like it might not make sense without context being passed in, or alternatively being renamed to GetRouteDataFromCurrentContext(). I also discovered that this method uses the context parameter directly internally. It matches the signature of the GetRouteData() method from RouteTable.Routes. Should probably leave that one alone, as it already uses HttpContextBase. However, it looks like the other methods still make perfect sense without the extra fluff. On the other hand, I have this sinking feeling that Microsoft had some master plan that I am not aware of and there was some better reason for those context parameters being passed than you or I can come up with in a few minutes. |
Their main idea was to move from System.Web to System.Web.Abstractions to make things testable. No idea for all the passing around though :) |
The only thing I can come up with is continuity. Without the parameter, there would be no way for the presentation or UI layers to choose the specific context to pass along to the service that uses it. Using DI, a strategy pattern or similar could be used to choose a configured context object, but it would still require a parameter to choose a unique instance. That said, there are a few methods and properties (from Microsoft) that simply grab HttpContext.Current out of the air and don't give the UI layer a choice. Specifically, they all call IsAccessibleToUser(). I think that one (or 2, rather) is safe to change. IsVisible cascades the call to the visibility provider. That one is safe to change, I think - assuming nobody would ever want to pass anything other than HttpContext.Current. The only one up in the air is FindSiteMapNode(HttpContext). It just doesn't feel right calling a Find() method without passing any criteria. I think renaming it might make it OK as long as the name is clear about where the criteria is coming from. I guess if anyone screamed that they need a way to pass a different context, an additional method could be added to the interface later.

Naming Conventions

Speaking of names, I was also thinking that it might be a better choice in the DI world to name the "Default" concrete class according to the conventions that many DI containers use. For example, with StructureMap nothing extra needs to be configured if you have an interface named ISomething and a class named Something - the container already knows implicitly which one is the default. Using a naming convention where all of the default concrete instances are prefixed with "Default" is supported, but requires extra configuration. Naming the default instance after the interface is supported out of the box, and from what I have read, some other DI containers do this as well. Thoughts? |
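For reference, the convention-based wiring in StructureMap needs nothing per interface - a minimal sketch:

using StructureMap;

public static class Bootstrapper
{
    public static IContainer BuildContainer()
    {
        return new Container(x =>
        {
            x.Scan(scan =>
            {
                scan.TheCallingAssembly();
                // Maps ISomething -> Something automatically; a "Default"
                // prefix convention would require extra configuration here.
                scan.WithDefaultConventions();
            });
        });
    }
}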
Hmm.. Nuget.org was up when I checked a few minutes before your post, and it appears to be up now. Are you referring to the API that you use to push the packages? But, just like 2 weeks ago, it has been several minutes since I pushed to NuGet, the packages are not there, and I didn't get any email indicating that there were failures. Just - nothing. It seemed like MyGet was having outages 2 weeks ago so I didn't make an issue of it, but now it looks like a pattern. |
Oops, I spoke too soon. One of the packages just showed up on NuGet. |
Anyway, I would hardly call displaying failed status on the UI for a few minutes before displaying success on the same build "failing silently". Now that I know it can do this I can probably work with it, but it caught me off guard because I didn't expect the status to change from failed to success a couple of minutes later. |
Check status.nuget.org - they had a major DNS outage the past hour. |
It looks like Microsoft released an out-of-band security patch that causes inconsistent build behavior between machines that have it installed and those that don't. In particular, they updated both MVC 3 and 4, from 3.0.0.0 > 3.0.0.1 and 4.0.0.0 > 4.0.0.1. The MVC 4 update worked after some tweaking, but I am unable to get the MVC 3 build to work both on my local environment and the MyGet server. Details are in my post and the included links. I was trying to get a patch out, but it seems that the only options to do so are
Since this is a required Windows update that fixes a critical issue, going forward seems like it would be the better choice. But then the MyGet server may have other broken projects as well. Thoughts? |
Isn't that just a NuGet dependency update for MVC 3 and 4? For 2 we can install it. |
Unfortunately not. For whatever reason, the NuGet package won't work on a computer that doesn't have the update installed. We have always used the local MVC 2 and 3 references and NuGet packages for MVC 4 and 5. I don't think that using the compiled library will be a problem to reference with the new version (NuGet will install the assembly redirects into web.config as appropriate), but building the library is (at least for MVC 3). After taking some time to soak this in, I suppose that some check could be added to the MSBuild project file (or the build script) to dynamically switch between MVC 3.0.0.0 in the GAC if it exists and 3.0.0.1 using NuGet if it doesn't. The main issue is that the security update completely removes 3.0.0.0 to eliminate any possibility that someone can still use it (and force them to apply the security patch), but apparently it makes some other change to the system (perhaps an assembly redirect policy) to enable 3.0.0.1 builds to work. |
Ok, I did a little experimenting, and it turns out that a reference condition gets the build working both on machines that have the update and on those that don't.

<!-- Due to the windows update MS14-059, we need this hack to ensure we can build MVC3 both on machines that have the update and those that don't -->
<Reference Condition=" Exists('$(windir)\Microsoft.NET\assembly\GAC_MSIL\System.Web.Mvc\v4.0_3.0.0.0__31bf3856ad364e35\System.Web.Mvc.dll') " Include="System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL" />
<Reference Condition=" !Exists('$(windir)\Microsoft.NET\assembly\GAC_MSIL\System.Web.Mvc\v4.0_3.0.0.0__31bf3856ad364e35\System.Web.Mvc.dll') " Include="System.Web.Mvc, Version=3.0.0.1, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL">
<Private>True</Private>
<HintPath>..\packages\Microsoft.AspNet.Mvc.3.0.20105.1\lib\net40\System.Web.Mvc.dll</HintPath>
</Reference>

But the clincher is that if the build is done with the update, no machine without the update can use it. If the build is done on a machine without the update, then it can be used whether the update is installed or not, and the update will automatically migrate it to the patched version via assembly redirects (see the snippet below). So the bottom line is, we can get away with this hack until the MyGet server is upgraded. At that point, the upgrade will be required for anyone who uses MvcSiteMapProvider.MVC3.Core. Also, any developer who does their own build will need to follow the same criteria - if they build on a machine with the update, then all applications that use that build require the update. This only applies to users of MVC 3; for all other versions this upgrade is seamless. |
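For reference, the redirect that ends up in web.config looks like this (illustrative snippet; versions per the MS14-059 update discussed above):

<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <!-- Requests for the old MVC 3 assembly are redirected to the patched one -->
      <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" culture="neutral" />
      <bindingRedirect oldVersion="3.0.0.0" newVersion="3.0.0.1" />
    </dependentAssembly>
  </assemblyBinding>
</runtime>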
BTW - Although the build clearly failed, MyGet showed that it succeeded with 2 packages. I am not sure if this is because the build script is returning a non-failure status or because MyGet isn't picking up a failure. |
Which branch/commits have you been doing this on? Will see if I can come up with something. |
I have already pushed it through to release. I am not certain if you mean the problem with MyGet reporting a success or the fact that it seems that this MVC 3 hack is required in order to build consistently. I have no problem with backing out this hack if there is a solution that works better. But since the MyGet build server has not had the Windows Update applied yet (I think), there shouldn't be any difference between the current version and the previous one in terms of assembly bindings. Actually, with this hack in place it won't be possible for us to tell by the behavior of the build when the server is patched. It would be nice to inform anyone using MVC 3 that an MVC upgrade is required when the time comes that the build server is upgraded. The build failure happened on commit 2087df5a8deb731aa30c49128e810b38b9f20977, before I applied the above solution in commit 498feab5cf3c0c45cca4423e65beb83c5281addb, if you want to take a crack at it. I have been testing locally on 2 machines - one with the Windows Update applied and one without. |
Thanks Shad. On the build success/failure: MyGet currently only fails when no packages are produced. There's an option underway to decide on when a build should fail. |
I have recently created a port of the BoboBrowse (Java) faceted search engine in .NET. It is now in beta, available on NuGet, all unit tests are passing, and it could use some more testing to ensure it is stable. Note that there is a port of BoboBrowse 2.x on CodePlex, but I couldn't get the owner of the repo to cooperate in order to include a NuGet package, so I had to host it myself. That version is also not compatible with the latest version of Lucene.Net (3.0.3), and not documented, but this one is. It is a port of BoboBrowse 3.1.0, which has several more features, including Geo Facet Search, Time Range Search, and Histogram Facet Search. You can see it in action by running the CarDemo solution from the repo. It's not a great demo (also a port from the Java version), but it does the job. That project falls into the same sort of category as MvcSiteMapProvider, since faceted search could be used for site navigation. I would appreciate it if you could write a short blog and/or a tweet about it to see if we can get some beta testers on board. |
Consider it done, scheduled two tweets for today. Does it support plugging the storage engine? |
No, it is strictly a query engine. There are no special tools to build a search index - you just use Lucene.Net to do that. The only thing to keep in mind is that the NumericField is not supported. BoboBrowse requires that numeric and date data be stored in the index as formatted strings. I created a document to explain this because it seems many people were attempting to use NumericField and it didn't work, and the original documentation was not very clear on that point. I even built a small project in Java to test using the NumericField and it just crashes. Changing it to a formatted string padded with leading zeros works like a charm. I improved the error message to indicate that NumericField is not supported, since the original one didn't explain the problem.

It plugs into Lucene.Net by decorating an open IndexReader class with a BoboIndexReader. Generally, the BoboIndexReader was set up to be run as a singleton. If you add the BoboBrowse.Net.Spring package, you can use an XML file to configure browse facet handlers for each index, but in a nutshell, facet handlers are configured when the BoboIndexReader is created. There are runtime facet handlers in version 3.1.0 to get around that limitation when the facets contain information that is calculated on the fly. On an open BoboIndexReader, you can then call Browse, passing in a BrowseRequest, which can contain a lot of configuration about how to organize the results, the main parts being a set of BrowseSelections and a Lucene.Net Query.

There is a lot of code, and most of it deals with sorting, grouping, and manipulating data types. In fact, there is about double the code there is in MvcSiteMapProvider. I have a sneaking suspicion that this entire library could be replaced by a couple of hundred lines of LINQ and proper use of generics, but it is difficult to figure out how to separate the facet business logic from the sorting and grouping code, since both of them are using arrays and bit shifting. Working out how to do that and have it work seamlessly with Lucene.Net's clunky API would be much more work than simply porting something that is already working, even if it does mean bringing along a lot of extra features that are difficult to understand, not well documented, and may not even work properly in .NET. Fortunately, I have managed to get all of the tests to pass, which raises my confidence that at least the main part of the business logic works in .NET. |
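To give a feel for the flow just described, roughly (a sketch from memory; exact member names may differ slightly from the actual BoboBrowse.Net API):

using BoboBrowse.Net;
using Lucene.Net.Index;
using Lucene.Net.Search;

public static class FacetedSearchExample
{
    public static void Run(IndexReader reader)
    {
        // Decorate an open Lucene.Net IndexReader; facet handlers are
        // configured at the time the BoboIndexReader is created.
        using (var boboReader = BoboIndexReader.GetInstance(reader))
        {
            var browser = new BoboBrowser(boboReader);

            var request = new BrowseRequest
            {
                Query = new MatchAllDocsQuery(),
                Count = 10,
                Offset = 0
            };

            // Narrow the results by a facet value, e.g. color = "red".
            var selection = new BrowseSelection("color");
            selection.AddValue("red");
            request.AddSelection(selection);

            var result = browser.Browse(request);
            // Inspect result.NumHits, the facet map, the hits, etc.
        }
    }
}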
Very nice! Will see if I can have a go with it. |
I am working on build scripts for 2 jQuery plugins, and one of them (the main one) has multiple packages. I want to deploy them with NuGet, Bower, and NPM. After reading the docs on Bower, it seems that they only support 1 module per Git repo and use the tags to keep track of versioning. So, I opted to go with using submodules for each package using this approach. This solved another problem - how to keep the minified and regular scripts in sync. I am planning to use MyGet to run the build. NuGet is working great. Bower (supposedly) will work off of the tags so there is no need to do a push. NPM on the other hand requires a push, and this is where I am getting tripped up. Is it possible to configure the script to push all of the packages from the Git submodules to the current feed rather than using a feed per submodule, allowing a single "push upstream" command for all of the NPM packages, the same way we do it for multiple NuGet packages? If so, would the following command do that?
I found the documentation that shows that you can allow multiple users to push to NPM, but I am not sure if that is relevant here or if that is possible to do on a MyGet server without making passwords public. |
Never mind. I found the NPM pack command as specified in your documentation. It wasn't very clear that step was necessary for NPM. Once the pack is done, MyGet publishes to the feed just fine. |
On a side note, trying to update submodules on MyGet fails when using SSH. I switched to HTTPS and it works, but it is a bit awkward to work with the submodules locally because of the different authentication scheme. |
Yeah only HTTP(S) is supported. We were not keen on asking users to hand us their key files to be able to work on the build machine. |
I see. Still, it would be nice if there were a way to set it up as SSH for local use, but have MyGet switch to HTTPS through some sort of configuration (either using a file or through the control panel). |
I am attempting to push my first NPM package from MyGet to NPM's official registry. However, it doesn't seem to be working. When I attempt to push, it says "some additional configuration required". It mentions an API key, but nowhere in the NPM documentation does it say anything about that. Then I follow the link to "Package Sources" and try to enter my login credentials on the edit form for the "Npmjs.org" source, but it won't let me save. It says that "the package source specified is invalid" even though I left the URL the default of http://registry.npmjs.org/ (which I double checked against the NPM documentation). The form validation doesn't let me continue. This is supposed to allow you to push upstream the same way you can with NuGet, isn't it? The documentation is a bit vague on that point. It seems to be focused on pushing a package onto the feed, but it isn't clear how to
which are the only tasks I am interested in. If you want to take a look, the feed I am working with is here: https://www.myget.org/feed/Packages/jquery-facebox |
Added myself to the feed. Seems the NPM package source is missing a username/password. |
Right. And how can I add them if it is not possible to save the NPM package source form? |
Does it show an error message next to the username or pwd box on save? (ping me via email) |
No, it says "the package source specified is invalid" next to the NPM URL (even though I didn't change it from the default). I presume the validation code has not been updated to allow this URL: http://registry.npmjs.org/. |
Can you email me? |
After installing SiteMap v5 and deploying to IIS, we are getting an error like the one below:

Compilation Error
Compiler Error Message: CS0234: The type or namespace name 'VisualStudio' does not exist in the namespace 'Microsoft' (are you missing an assembly reference?)
Source Error: Line 4: @using System.Web.UI.WebControls

Any ideas to solve this issue? |
I got this error running the MVC 5 version: System.StackOverflowException was unhandled |
As it's been a while since ASP.NET MVC 4 came out, and seeing that ASP.NET Web API dropped its dependencies on System.Web, I'm kind of looking at evolving MvcSitemapProvider in that direction too...
vNext would:
That's the technical side. What about features? I would love to hear what you are using today, what you would not want to miss, what's currently missing, anything!