Note: Most of the articles on this blog are taken from other reputable blogs and websites, so the author is not responsible for any issues.

Easy way to get a Google adsense account

Hey guys... no need to wait 7 months for Google AdSense approval. Here's an amazing trick to get a Google AdSense account.
Simple steps to get a Google AdSense account:

  • Create a user account at www.indyarocks.com
  • Add a profile photo and upload more than 10 photos.
  • Photos should be real.
  • Write 2 unique blog posts.
  • Create a new Gmail account.
  • Now sign up for a Google AdSense account.
  • Google will take 10-15 days for approval.
  • Google will send a confirmation mail about this.
  • In case of any issue, Indyarocks will send you a message.


--



Mangesh Singh, Software Engineer
_______________________________________________________
Url: http://www.techtipshub.in Mobile: +91-9457874019, Email: mangeshgangwar@gmail.com, IM: mangeshgangwar@gmail.com(gtalk)


Dell Streak The versatile 5-inch Android tablet


I'm pumped to finally get a chance to blog about the upcoming Dell Streak tablet device. Since we first previewed the Dell Streak at CES 2010, it's been making waves in the blogosphere ever since. The 5-inch tablet will launch first to customers in the United Kingdom in early June. Customers there will be able to purchase it across the UK at O2 stores, O2.co.uk, The Carphone Warehouse and later this month on Dell.co.uk. Pricing and data plans for UK customers will be announced by O2 before availability. We plan to make the Dell Streak available to customers in the United States later this summer.
I've been at Dell for 16 years, and I don't think there's ever been more buzz around a single Dell product than this. In my view, that's for good reason. Hardware and design-wise, this thing impresses. Add the ever-increasing capability that Android brings to the equation, and you've got a mobile device that offers a ton of flexibility while looking cool in the process. The Dell Streak brings together a great web browsing experience, multi-tasking capability, slick turn by turn navigation and a great way to enjoy your photos, movies and music into a sleek device that's built for mobility.
Update: Here's a short video we uploaded to the Dell YouTube channel that provides a quick overview of what you can use Streak for:


The Dell Streak is a hybrid device that lives in the space between a smartphone and other larger tablets or netbooks that you might be using right now. We designed it to provide a wide range of users flexibility to do what they need with a mobile device. That's why we packed the Dell Streak with a lot of features. We'll utilize that flexibility via over-the-air updates for platform upgrades, Adobe Flash 10.1 on Android 2.2 later this year, plus other enhancements like video chat applications and more. 
After using the Dell Streak for a bit, one thing that really stands out in my opinion is the screen. The vivid, 5-inch diagonal display may seem only slightly larger than many of the smartphones making waves out there specs-wise. But when that larger screen is coupled with higher pixel density, it's surprising how much difference that extra inch and a half or so makes in everyday activities like browsing the web, playing games or watching video. Because it's made with Gorilla Glass, the screen also has a pretty big durability advantage over more fragile mobile devices. Take a look at the Gizmodo hands-on to see what I mean. The Dell Streak is thin (10mm, which is thinner than a lot of mobile devices out there), and though it's just a bit heavier than other smartphones, it feels solid and balanced, which makes using it pretty natural across a number of activities. We'll also offer all kinds of Dell Streak accessories like a car dock kit, battery replacements, a home AV dock and more.
The Dell Streak is a device designed for accessing entertainment, navigating your busy schedule and connecting you to the friends and family who matter to you. If you want to dig into more details, check out this video interview with Kevin Andrew from the Dell Streak development team:


Hardware-wise, the Dell Streak features the following:
  • A sharp 5-inch capacitive multi-touch WVGA (800x480) display for a great full-screen experience watching video or browsing the web
  • Fast 1GHz Snapdragon ARM-based mobile processor from Qualcomm
  • 5 MP autofocus camera with dual LED flash that offers easy point & shoot capability and quick uploads to YouTube, Flickr, Facebook and more
  • VGA front-facing camera enables video chat functionality down the road
  • A user-removable (and replaceable) battery
  • A 3.5mm headphone jack means many of you can use the Dell Streak as the music source (and more) in your car
  • Integrated 3G + Wi-Fi (802.11b/g) + Bluetooth 2.1 (think headsets, external keyboards, stereo headsets, etc.)
  • UMTS / GPRS / EDGE class 12 GSM radio with link speeds of HSDPA 7.2 Mbps / HSUPA
  • A user-accessible Micro SD slot expandable up to 32GB. That means you can store  lots of movies, music, photos or other kinds of files.
On the software side, here's what you can expect:
  • A customized multi-touch version of the Google Android operating system that features Dell user interface enhancements
  • Access to over 38,000  apps (and growing) via the Android Marketplace
  • Microsoft Exchange connectivity and integration through TouchDown
  • Google Voice support
  • Integrated Google Maps with voice-activated search, turn-by-turn navigation, street and satellite views
  • Quick access to activity streams via integrated social network app widgets like Twitter, Facebook, YouTube
Like an increasing number of our laptop and netbook products, Dell Streak will ship with cushions made from compostable bamboo.
More Dell Streak details will be coming in subsequent posts. In the meantime, feel free to leave comments or questions to this blog post, or follow the discussion on Twitter by using the #DellStreak hashtag.

Dynamic URLs vs. static URLs

Chatting with webmasters often reveals widespread beliefs that might have been accurate in the past, but are not necessarily up-to-date any more. This was the case when we recently talked to a couple of friends about the structure of a URL. One friend was concerned about using dynamic URLs, since (as she told us) "search engines can't cope with these." Another friend thought that dynamic URLs weren't a problem at all for search engines and that these issues were a thing of the past. One even admitted that he never understood the fuss about dynamic URLs in comparison to static URLs. For us, that was the moment we decided to read up on the topic of dynamic and static URLs. First, let's clarify what we're talking about:

What is a static URL? 
A static URL is one that does not change, so it typically does not contain any URL parameters. It can look like this: http://www.example.com/archive/january.htm. You can search for static URLs on Google by typing filetype:htm in the search field. Updating these kinds of pages can be time-consuming, especially if the amount of information grows quickly, since every single page has to be hard-coded. This is why webmasters who deal with large, frequently updated sites like online shops, forum communities, blogs or content management systems may use dynamic URLs.

What is a dynamic URL?
If the content of a site is stored in a database and pulled for display on pages on demand, dynamic URLs may be used. In that case the site serves basically as a template for the content. Usually, a dynamic URL would look something like this: http://code.google.com/p/google-checkout-php-sample-code/issues/detail?id=31. You can spot dynamic URLs by looking for characters like: ? = &. Dynamic URLs have the disadvantage that different URLs can have the same content. So different users might link to URLs with different parameters which have the same content. That's one reason why webmasters sometimes want to rewrite their URLs to static ones.

Should I try to make my dynamic URLs look static?
Following are some key points you should keep in mind while dealing with dynamic URLs:

  1. It's quite hard to correctly create and maintain rewrites that change dynamic URLs to static-looking URLs.
  2. It's much safer to serve us the original dynamic URL and let us handle the problem of detecting and avoiding problematic parameters.
  3. If you want to rewrite your URL, please remove unnecessary parameters while maintaining a dynamic-looking URL.
  4. If you want to serve a static URL instead of a dynamic URL you should create a static equivalent of your content.
Which can Googlebot read better, static or dynamic URLs?
We've come across many webmasters who, like our friend, believed that static or static-looking URLs were an advantage for indexing and ranking their sites. This is based on the presumption that search engines have issues with crawling and analyzing URLs that include session IDs or source trackers. However, as a matter of fact, we at Google have made some progress in both areas. While static URLs might have a slight advantage in terms of clickthrough rates because users can easily read the URLs, the decision to use database-driven websites does not imply a significant disadvantage in terms of indexing and ranking. Providing search engines with dynamic URLs should be favored over hiding parameters to make them look static.

Let's now look at some of the widespread beliefs concerning dynamic URLs and correct some of the assumptions which spook webmasters. :)

Myth: "Dynamic URLs cannot be crawled."
Fact: We can crawl dynamic URLs and interpret the different parameters. We might have problems crawling and ranking your dynamic URLs if you try to make your URLs look static and in the process hide parameters which offer the Googlebot valuable information. One recommendation is to avoid reformatting a dynamic URL to make it look static. It's always advisable to use static content with static URLs as much as possible, but in cases where you decide to use dynamic content, you should give us the possibility to analyze your URL structure and not remove information by hiding parameters and making them look static.

Myth: "Dynamic URLs are okay if you use fewer than three parameters."
Fact: There is no limit on the number of parameters, but a good rule of thumb would be to keep your URLs short (this applies to all URLs, whether static or dynamic). You may be able to remove some parameters which aren't essential for Googlebot and offer your users a nice looking dynamic URL. If you are not able to figure out which parameters to remove, we'd advise you to serve us all the parameters in your dynamic URL and our system will figure out which ones do not matter. Hiding your parameters keeps us from analyzing your URLs properly and we won't be able to recognize the parameters as such, which could cause a loss of valuable information.

Following are some questions we thought you might have at this point.

Does that mean I should avoid rewriting dynamic URLs at all?
That's our recommendation, unless your rewrites are limited to removing unnecessary parameters, or you are very diligent in removing all parameters that could cause problems. If you transform your dynamic URL to make it look static you should be aware that we might not be able to interpret the information correctly in all cases. If you want to serve a static equivalent of your site, you might want to consider transforming the underlying content by serving a replacement which is truly static. One example would be to generate files for all the paths and make them accessible somewhere on your site. However, if you're using URL rewriting (rather than making a copy of the content) to produce static-looking URLs from a dynamic site, you could be doing harm rather than good. Feel free to serve us your standard dynamic URL and we will automatically find the parameters which are unnecessary.

Can you give me an example?
If you have a dynamic URL which is in the standard format like foo?key1=value&key2=value2 we recommend that you leave the URL unchanged, and Google will determine which parameters can be removed; or you could remove unnecessary parameters for your users. Be careful that you only remove parameters which do not matter. Here's an example of a URL with a couple of parameters:

www.example.com/article/bin/answer.foo?language=en&answer=3&sid=98971298178906&query=URL
  • language=en - indicates the language of the article
  • answer=3 - the article has the number 3
  • sid=98971298178906 - the session ID number is 98971298178906
  • query=URL - the query with which the article was found is [URL]
Not all of these parameters offer additional information. So rewriting the URL to www.example.com/article/bin/answer.foo?language=en&answer=3 probably would not cause any problems as all irrelevant parameters are removed. 

The following are some examples of static-looking URLs which may cause more crawling problems than serving the dynamic URL without rewriting:
  • www.example.com/article/bin/answer.foo/en/3/98971298178906/URL
  • www.example.com/article/bin/answer.foo/language=en/answer=3/
    sid=98971298178906/query=URL
  • www.example.com/article/bin/answer.foo/language/en/answer/3/
    sid/98971298178906/query/URL
  • www.example.com/article/bin/answer.foo/en,3,98971298178906,URL
Rewriting your dynamic URL to one of these examples could cause us to crawl the same piece of content needlessly via many different URLs with varying values for session IDs (sid) and query. These forms make it difficult for us to understand that URL and 98971298178906 have nothing to do with the actual content which is returned via this URL. However, here's an example of a rewrite where all irrelevant parameters have been removed:
  • www.example.com/article/bin/answer.foo/en/3
Although we are able to process this URL correctly, we would still discourage you from using this rewrite as it is hard to maintain and needs to be updated as soon as a new parameter is added to the original dynamic URL. Failure to do this would again result in a static looking URL which is hiding parameters. So the best solution is often to keep your dynamic URLs as they are. Or, if you remove irrelevant parameters, bear in mind to leave the URL dynamic as the above example of a rewritten URL shows:
  • www.example.com/article/bin/answer.foo?language=en&answer=3
We hope this article is helpful to you and our friends to shed some light on the various assumptions around dynamic URLs. Please feel free to join our discussion group if you have any further questions.

Free calling in Gmail extended through 2011

When we launched calling in Gmail back in August, we wanted it to be easy and affordable, so we made calls to the U.S. and Canada free for the rest of 2010. In the spirit of holiday giving and to help people keep in touch in the new year, we're extending free calling for all of 2011.

In case you haven't tried it yet, dialing a phone number works just like a regular phone. Look for "Call phone" at the top of your Gmail chat list and dial a number or enter a contact's name.


To learn more, visit gmail.com/call. Calling in Gmail is currently only available to U.S. based Gmail users.

Happy New Year and happy calling!

General SQL Server Performance Tuning Tips

When your transaction log grows large and you want a quick way to shrink it, try this option. Change the database recovery mode of the database you want to shrink from "full" to "simple," shrink the log file, then switch back to the "full" recovery mode and perform a full backup of the database to restart the log backup chain. By temporarily changing from the Full recovery model to the Simple recovery model and then back, SQL Server will only keep the "active" portion of the log, which is very small. [7.0, 2000, 2005] Contributed by Tom Kitta. Updated 5-7-2007
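As a sketch, the sequence looks like this (the database name MyDb, the logical log file name MyDb_log, and the backup path are placeholders; substitute your own):

```sql
-- Switch to SIMPLE recovery so the inactive portion of the log is discarded.
ALTER DATABASE MyDb SET RECOVERY SIMPLE;

-- In SIMPLE mode a checkpoint truncates the inactive log; then shrink the file.
CHECKPOINT;
DBCC SHRINKFILE (MyDb_log, 1);  -- logical log file name, target size in MB

-- Switch back to FULL recovery.
ALTER DATABASE MyDb SET RECOVERY FULL;

-- Take a full backup afterwards so subsequent log backups have a valid base.
BACKUP DATABASE MyDb TO DISK = 'C:\Backups\MyDb.bak';
```

Run this only in a maintenance window: while the database is in SIMPLE mode, point-in-time recovery via log backups is not possible.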

*****
If you need to delete all the rows in a table, don't use DELETE to delete them, as the DELETE statement is a fully logged operation and can take a significant amount of time, especially if the table is large. To perform the same task much faster, use TRUNCATE TABLE instead, which is a minimally logged operation. Besides deleting all of the records in a table, this command will also reset the seed of any IDENTITY column back to its original value.
After you have run the TRUNCATE TABLE statement, it is important then to manually update the statistics on this table using UPDATE STATISTICS. This is because running TRUNCATE TABLE will not reset the statistics for the table, which means that as you add data to the table, the statistics for that table will be incorrect for a time period. Of course, if you wait long enough, and if you have Auto Update Statistics turned on for the database, then the statistics will eventually catch up with themselves. But this may not happen quickly, resulting in slowly performing queries because the Query Optimizer is using out-of-date statistics. [6.5, 7.0, 2000, 2005] Updated 5-7-2007
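A minimal sketch of the sequence described above, using a hypothetical dbo.StagingOrders table:

```sql
-- Remove all rows quickly (minimally logged; also reseeds any IDENTITY column).
TRUNCATE TABLE dbo.StagingOrders;

-- Refresh optimizer statistics right away, rather than waiting for
-- Auto Update Statistics to eventually catch up after the table is reloaded.
UPDATE STATISTICS dbo.StagingOrders WITH FULLSCAN;
```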
*****
If you use TRUNCATE TABLE instead of DELETE to remove all of the rows of a table, TRUNCATE TABLE will not work when there are foreign key references to that table. A workaround is to DROP the constraints before running the TRUNCATE. Here's a generic script that will drop all existing foreign key constraints on a specific table:
CREATE TABLE dropping_constraints
(
    cmd VARCHAR(8000)
)

INSERT INTO dropping_constraints
SELECT 'ALTER TABLE [' + t2.Table_Name + '] DROP CONSTRAINT ' + t1.Constraint_Name
FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS t1
INNER JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE t2
    ON t1.CONSTRAINT_NAME = t2.CONSTRAINT_NAME
WHERE t2.TABLE_NAME = 'your_tablename_goes_here'

DECLARE @stmt VARCHAR(8000)
DECLARE @rowcnt INT
SELECT TOP 1 @stmt = cmd FROM dropping_constraints
SET @rowcnt = @@ROWCOUNT
WHILE @rowcnt <> 0
BEGIN
    EXEC (@stmt)
    SET @stmt = 'DELETE FROM dropping_constraints WHERE cmd = ' + QUOTENAME(@stmt, '''')
    EXEC (@stmt)
    SELECT TOP 1 @stmt = cmd FROM dropping_constraints
    SET @rowcnt = @@ROWCOUNT
END

DROP TABLE dropping_constraints
The above code can also be extended to drop all FK constraints in the current database. To achieve this, just comment out the WHERE clause. [7.0, 2000] Updated 5-7-2007
*****
Don't run a screensaver on your production SQL Servers, it can unnecessarily use CPU cycles that should be going to your application. The only exception to this is the "blank screen" screensaver, which is OK to use. [6.5, 7.0, 2000, 2005] Updated 5-7-2007
*****
Don't run SQL Server on the same physical server that you are running Terminal Services, or Citrix software. Both Terminal Services and Citrix are huge resource hogs, and will significantly affect the performance of SQL Server. Running the administrative version of Terminal Services on a SQL Server physical server, on the other hand, is OK, and a good idea from a convenience point of view. As is mentioned in other parts of this website, ideally, SQL Server should run on a dedicated physical server. But if you have to share a SQL Server with another application, make sure it is not Terminal Services or Citrix. [7.0, 2000, 2005] Updated 5-7-2007
*****
Use sp_who or sp_who2 (sp_who2 is not documented in the SQL Server Books Online, but offers more details than sp_who) to provide locking and performance-related information about current connections to SQL Server. Sometimes, when SQL Server is very busy, you can't use Enterprise Manager or Management Studio to view current connection activity via the GUI, but you can always use these two commands from Query Analyzer or Management Studio, even when SQL Server is very busy. [6.5, 7.0, 2000, 2005] Updated 5-7-2007
*****
SQL Server 7.0 and 2000 use their own internal thread scheduler (called the UMS) when running in either native thread mode or in fiber mode. By examining the UMS's Scheduler Queue Length, you can help determine whether the CPU or CPUs on the server present a bottleneck.
This is similar to checking the System Object: Processor Queue Length in Performance Monitor. If you are not familiar with this counter, what this counter tells you is how many threads are waiting to be executed on the server. Generally, if there are more than two threads waiting to be executed on a server, then that server can be assumed to have a CPU bottleneck.
The advantage of using the UMS's Schedule Queue Length over the System Object: Processor Queue Length is that it focuses strictly on SQL Server threads, not all of the threads running on a server.
To view what is going on inside the UMS, you can run the following undocumented command:
DBCC SQLPERF(UMSSTATS)
For every CPU in your server, you will get a Scheduler. Each Scheduler is identified with a number, starting with 0. So if you have four CPUs in your server, there will be four Schedulers listed after running the above command, Scheduler IDs 0 through 3.
The "num users" tells you the number of SQL threads there are for a specific scheduler.
The "num runnable," better known as the "Scheduler Queue Length," is the key indicator to watch. Generally, this number will be 0, which indicates that there are no SQL Server threads waiting to run. If this number is 2 or more, it indicates a possible CPU bottleneck on the server. Keep in mind that the values presented by this command are point data, which means that the values are only accurate for the split second when they were captured and will always be changing. But if you run this command when the server is very busy, the results should be indicative of what is going on at that time. You may want to run this command multiple times to see what is going on over time.
The "num workers" refers to the actual number of worker threads there are in the thread pool.
The "idle workers" refers to the number of idle worker threads.
The "cntxt switches" refers to the number of context switches between runnable threads.
The "cntxt switches(idle)" refers to the number of context switches to "idle" threads.

Denormalization in SQL Server for Fun and Profit

Almost from birth, database developers are taught that their databases must be normalized.  In many shops, failing to fully normalize can result in anything from public ridicule to exile to the company’s Siberian office.  Rarely discussed are the significant benefits that can accrue from intentionally denormalizing portions of a database schema.  Myths about denormalization abound, such as:

  • A normalized schema is always more stable and maintainable than a denormalized one.
  • The only benefit of denormalization is increased performance.
  • The performance increases from denormalization aren’t worth the drawbacks.
This article will address the first two points (I’ll tackle the final point in the second part of this series).  Other than for increased performance, when might you want to intentionally denormalize your structure?  A primary reason is to “future-proof” your application from changes in business logic that would force significant schema modifications.
Let’s look at a simple example.  You’re designing a database for a pizza store.  Each customer’s order contains one or more pizzas, and each order is assigned to a delivery driver.  In normal form, your schema looks like:
Table: Orders
Customer
Driver
Amount

Table: OrderItems
Order
Pizza Type
Planning Ahead.  Let's say you've heard the owner is considering a new delivery model.  To increase customer satisfaction, every pizza will be boxed and sent for delivery the moment it comes out of the oven, even if other pizzas in the order are still baking.
Since you're a savvy developer, you plan for this and denormalize your data structure.  Though today the driver column is functionally dependent only on the order itself, you cross your fingers, take a deep breath, and violate Second Normal Form by placing it in the OrderItems table.  There: you've just future-proofed your application.  Orders can now have multiple drivers.
Your denormalization has introduced a small update anomaly (if an order's driver changes, you have to update multiple rows, rather than just one), but if the probability of the delivery model change is large, this is well worth the cost.  This is typical when denormalizing, but usually it's a small problem, and one that can be handled automatically via triggers, constraints, or other means.  For instance, in this case, you can create (or modify the existing) update SP for Orders to cascade the change into OrderItems.  Alternatively, you can create an UPDATE trigger on OrderItems that ensures all rows within one order have the same driver.  When the rule changes in the future, just remove the trigger; no need to update your tables or any queries that reference them.
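A sketch of such a trigger, assuming illustrative names (an Orders table keyed by OrderID, and an OrderItems table carrying the denormalized Driver column):

```sql
-- Enforce today's rule: all items in one order share a single driver.
-- Drop this trigger when the one-driver-per-order rule is retired.
CREATE TRIGGER trg_OrderItems_SameDriver ON OrderItems
AFTER INSERT, UPDATE
AS
BEGIN
    -- Reject the change if any affected order now has more than one driver.
    IF EXISTS (
        SELECT OrderID
        FROM OrderItems
        WHERE OrderID IN (SELECT OrderID FROM inserted)
        GROUP BY OrderID
        HAVING COUNT(DISTINCT Driver) > 1
    )
    BEGIN
        RAISERROR ('All items in an order must share one driver.', 16, 1)
        ROLLBACK TRANSACTION
    END
END
```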
Now let’s consider a slightly more complex (and somewhat more realistic) case.   Imagine an application to manage student and teacher assignments for an elementary school.    A sample schema might be:

Table: Teachers
Teacher (PK)
Classroom

Table: Students
Student (PK)
Teacher (FK)

Planning Ahead.  You happen to know that other elementary schools in the region are assigning secondary teachers to some classrooms.  You decide to support this in advance within your schema.  How would you do it via denormalization?  The ugly “repeating groups” solution of adding a “Teacher2” column is one solution, but not one that should appeal to you.    Far better to make the classroom itself the primary key, and move teachers to a child table:
Table: Classrooms
Classroom  (PK)

Table: Teachers
Teacher  (PK)
Classroom  (FK)

Table: Students
Student  (PK)
Classroom  (FK)

As before, this denormalization creates a problem we need to address.  In the future, the school may support multiple teachers in one classroom, but today that’s an error.   You solve that by the simple expedient of adding a unique constraint on the classroom FK in the teacher’s table.    When the business rule changes in the future, you simply remove the constraint.   Voila!  A far better solution than having to significantly alter your views, queries, and stored procs to conform to a new schema.
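A sketch of that constraint, using the table and column names from the schema above:

```sql
-- Today's rule: at most one teacher per classroom,
-- enforced declaratively rather than baked into the schema.
ALTER TABLE Teachers
    ADD CONSTRAINT UQ_Teachers_Classroom UNIQUE (Classroom);

-- When the school later allows multiple teachers per classroom:
-- ALTER TABLE Teachers DROP CONSTRAINT UQ_Teachers_Classroom;
```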

Understanding Common Type System in .NET Framework

       As the .NET Framework is language-independent and supports over 20 different programming languages, programmers will write data types in their own programming language.
For example, an integer variable in C# is written as int, whereas in Visual Basic it is written as Integer. Therefore the .NET Framework has a single class, System.Int32, to interpret these variables. Similarly, for the ArrayList data type the .NET Framework has a common type called System.Collections.ArrayList. In the .NET Framework, System.Object is the common base type from which all other types are derived.

      This system is called the Common Type System. The types in the .NET Framework are the base on which .NET applications, components, and controls are built. The Common Type System defines how data types are declared and managed at runtime. The Common Type System performs the following functions:

• Establishes a framework that enables cross-language integration, type safety, and high-performance code execution.
• Provides an object-oriented model.
• Standardizes the conventions that all the languages must follow.
• Invokes security checks.
• Encapsulates data structures.


The Common Type System supports two general categories of types: value types and reference types. Value types contain their data directly and are either user-defined or built-in. They are stored on the stack or inline in a structure. Reference types store a reference to the value's memory address and are allocated on the heap. Reference types can be categorized into self-describing types, pointer types, or interface types, and you can determine the type of a reference from the values of self-describing types.
There are many other types that can be defined under value types and reference types. In the .NET Framework, the System namespace is the root for all data types. This namespace contains classes such as Object, Byte, String, and Int32 that represent the base data types. These base data types are used by all applications. At runtime a type name has two parts: the assembly name and the type's name within the assembly. The runtime in the .NET Framework uses assemblies to find and load types.

How to configure dbmail in sql server 2005 and 2008

Step 1) Create Profile and Account:
You need to create a profile and account using the Configure Database Mail Wizard, which can be accessed from the
Configure Database Mail context menu of the Database Mail node under the Management node in Object Explorer.
This wizard is used to manage accounts, profiles, and Database Mail global settings which are shown below:





Step 3) Send Email:
After all configurations are done, we are now ready to send an email. To send mail,
we need to execute the stored procedure sp_send_dbmail and provide the required parameters as shown below:
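A minimal call might look like this (the profile name and addresses are placeholders; use the profile you created in Step 1):

```sql
-- sp_send_dbmail lives in msdb; the caller needs the DatabaseMailUserRole.
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DBA Mail',
    @recipients   = 'admin@example.com',
    @subject      = 'Database Mail test',
    @body         = 'If you received this, Database Mail is configured.';
```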



How to decompile .net DLL or EXE

Dis# is a good decompiler for .NET. You can get your code back within a few minutes by using Dis#.
Before purchasing Dis#, take a trial of it. If you need the full version of Dis# with a lifetime membership, please send me a mail at mangeshgangwar@gmail.com.
   
Features of dis#

  • Internal editor for quick editing


    Type new name and press Enter:

    The typical problem with decompilation is the absence of full source information in the executable file. For instance, a .NET assembly does not contain the names of local variables. A program can automatically assign local names in accordance with their types (which is what Dis# does), but the result still differs too much from the original source.

    Dis# takes the next logical step in this direction. You can edit the names and keep the changes in a project file.
  • Download DIS# from Here    
  • Dis# project file

    Dis# has its own metadata structure, which extends the PE metadata structure with all the information necessary for decompilation, such as local variable names. You can save Dis# metadata in the project file (extension .dis) and keep all your changes.

  • Decompilation Speed

    Custom metadata provides outstanding decompilation speed, 25-700 times faster than other .NET decompilers. Dis# decompiles more than 2000 methods per second.

  • Multiple Languages decompilation

    Support for C#, Visual Basic.NET, Delphi.NET and Chrome.

  • Well formed code

    Dis# generates code that looks like it was edited by a human. The Dis# .NET decompiler has many options to adjust the code view to your preferences.

  • Optimization

    Dis# optimizes the decompiled code.

  • .NET 2.0 support

    Dis# supports the .NET 2.0 assembly format, generics, etc.

  • Raw Code

    In some cases you need to view the raw code (before the high-level decompilation algorithms are applied).



Windows Communication Foundation


Windows Communication Foundation is a part of the .NET Framework that provides a unified programming model for building service-oriented applications that communicate across the web and the enterprise.


Visual Studio 2010 and .NET Framework 4 are Here!
Visual Studio 2010 and .NET Framework 4 mark the next generation of developer tools from Microsoft. Check it out! 
 
  
 
WS-* Interoperability between WCF and Metro
Information for .NET Framework and Java Metro developers on creating standards-based Web Services that enable communication between the two platforms. Demonstrates interoperability across a range of WS-I and other scenarios.
 
New Web Service Interoperability with WCF page on MSDN
We've just released a new page on MSDN where you can get all kinds of information about WS-* interoperability with WCF - How-To White Papers, case studies, and news about Microsoft’s web service interoperability efforts.
Standards-Based Interoperability between SAP NetWeaver and Microsoft .NET Framework
Information for .NET Framework and SAP NetWeaver developers on creating standards-based Web Services that enable communication between the two platforms. Demonstrates simple ping/echo scenarios, as well as a complete end-to-end ERP Purchase order application.
A Developer's Introduction to Windows Communication Foundation (WCF) .NET 4
An overview of the most important new features and improvements in WCF, with enough technical detail and code to help you as a developer understand how to use them. Now updated for the release of .NET 4 and Visual Studio 2010. 
 

New Social API Web Application Toolkit for .NET Web Developers

Latest in this API

We've just published an update to our Web Application Toolkits! Here is a summary of web application toolkits:

The toolkits are totally free for .NET developers. They help ASP.NET web developers complete common web development tasks and quickly add new features to their apps. Whether it's Bing Maps integration or adding social capabilities to your site, there's a toolkit for you. For the full list of Web Application Toolkits check out this website.

Summary:-

  1. Added a new Social API Web Application Toolkit
  2. Updated all the WATs to be compatible with Visual Studio 2010
Introducing the Social API Web Application Toolkit




As social networking Web sites become more and more popular, users often want to access the different networks they belong to simultaneously, from a single entry point. For example, a user might want to post the same message they are posting on your site to Facebook, Twitter, MySpace and so on.

Although many of these social networks provide APIs for accessing their information, you might want to integrate your Web application with several social sites at the same time, and to do so in a consistent manner, without having to make numerous modifications to your code for each new social network you want to incorporate.

This Web Application Toolkit provides a generic "Social Networks" API that lets you connect your Web application to different social networks and manage them through one entry point with a consistent set of methods. The Toolkit includes examples of how to use the Social Networks API to connect a Web application with Facebook and Twitter, letting you manage the data provided by these networks in a generic way.

Please notice that this Toolkit includes examples for only a reduced set of operations (mainly posting status updates) within the two social networks mentioned above. The documentation includes instructions on how to extend this set to more operations and more social networks.

More Details:-

This toolkit comes with Facebook and Twitter Providers that allow you to perform tasks against different Social Network APIs in a common way.  For example, Facebook and Twitter do authentication in different ways which is a pain because you have to write different code for each network.  The Providers mean that you can call common methods and pass in which social networks you want to perform the action against – behind the scenes the providers call the appropriate methods against Facebook or Twitter to get the job done.  The provider model also makes it easy to extend the API to other Social Networks in the future – we've provided detailed instructions on how to do this in the documentation that comes with the toolkit download – it's in the "next steps" section.

    public ActionResult NetworkLogin(string providerName)
    {
        var social = new SocialProxy();
        return this.Redirect(social.GetProvider(providerName).LoginUrl);
    }

The code above shows how to use the SocialProxy class, included in the toolkit, to get the login URL for the given social network. In this example we then redirect the user to that URL instead of to an MVC view in our application.

The Social Networks API checks whether the user is already authenticated in the application and, if not, authenticates them by using the FormsAuthentication.SetAuthCookie method. The API maintains a user repository with the account information for each user's social networks.

    public bool Login(string providerName, HttpContextBase httpContext)
    {
        var provider = this.GetProvider(providerName);
        var identity = provider.GetLoginIdentity(httpContext);

        if (identity == null)
        {
            return false;
        }

        if (!httpContext.User.Identity.IsAuthenticated)
        {
            var userId = this.usersRepository.FindIdentity(identity) ?? this.usersRepository.CreateUser(identity);
            FormsAuthentication.SetAuthCookie(userId, false);
        }
        else
        {
            var userId = this.usersRepository.FindIdentity(identity);
            if (userId != httpContext.User.Identity.Name)
            {
                this.usersRepository.AssociateIdentity(httpContext.User.Identity.Name, identity);
            }
        }

        return true;
    }

Notice that new users do not need to create a separate account when registering on the Web application. The API stores the user's Facebook and Twitter account information, and creates a unique identifier on the user repository for that user to keep them associated.

The API stores the user's internal identifier together with the login information for each of their associated social networks, by using the UsersRepository.AssociateIdentity method.
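Following the provider model described above, supporting another social network means implementing one new provider. The interface name and members below are assumptions made for illustration; the toolkit's actual extension points are described in the "next steps" section of its documentation:

```csharp
// Hypothetical sketch: "ISocialNetworkProvider" is an assumed interface
// name, not confirmed from the toolkit's source.
public interface ISocialNetworkProvider
{
    string Name { get; }
    string LoginUrl { get; }
    bool PostStatus(string userId, string message);
}

public class LinkedInProvider : ISocialNetworkProvider
{
    public string Name { get { return "LinkedIn"; } }

    // Each network authenticates differently; the provider hides that
    // difference behind the common LoginUrl property the proxy uses.
    public string LoginUrl
    {
        get { return "https://www.linkedin.com/"; } // placeholder URL
    }

    public bool PostStatus(string userId, string message)
    {
        // Call the network's own API here; the rest of the application
        // only ever talks to the common provider interface.
        return true;
    }
}
```

Because the rest of the application calls providers only through the common interface, adding a network requires no changes to the code that posts updates or logs users in.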