Thursday, December 20, 2007

The EJB Query Language (EJB-QL) -Part 3

EJB-QL also contains the following built-in functions:
· CONCAT(String, String) combines two strings into one and returns a String.
· SUBSTRING(String, start, length) cuts a String into a smaller String, beginning at start and running for length characters.
· LOCATE(String, String [, start]) returns an int denoting where a String is located within another String. You can use the optional start parameter to indicate where to begin searching.
· LENGTH(String) gives you a string's length, returned as an int.
· ABS(number) returns the absolute value of a number, which can be an int, float, or double.
· SQRT(double) takes the square root of a number and returns it as a double.
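As a sketch of how these functions can appear in a query, the following hypothetical finder matches products whose description contains the word 'chip' (the bean and field names are assumptions, following the Product examples used later in this article):

```
SELECT OBJECT(p)
FROM Product p
WHERE LOCATE('chip', p.description) > 0
```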

Dealing with collections
Normally, if you want to use a collection in the WHERE clause, you must first declare a variable over that collection in the FROM clause. For example, the following is invalid:

SELECT OBJECT(o)
FROM Order AS o
WHERE o.lineItems.product.name = 'chip'

The above is invalid because we are navigating through the collection o.lineItems directly in the WHERE clause. The following is the correct way to write this EJB-QL:

SELECT OBJECT(l)
FROM Order AS o, IN(o.lineItems) l
WHERE l.product.name = 'chip'

The two special exceptions to this rule are when you use the EMPTY or MEMBER conditional expressions, shown in Table X.X. In these cases, you can use collections in the WHERE clause.

Performing comparisons
Sometimes you may need to declare more than one variable that represents the same entity bean. When you are performing comparisons this comes in very handy. For example:

SELECT OBJECT(p1)
FROM Product p1, Product p2
WHERE p1.quantityInStock > p2.quantityInStock AND
p2.name='Pentium 866'

The above query finds all products that have a greater quantity in-stock than a Pentium 866 chip.
The SELECT clause
The EJB-QL SELECT clause specifies the return results of a query. To understand why we need the SELECT clause, consider the following query, which returns all orders that contain line-items:
SELECT OBJECT(o)
FROM Order AS o, IN(o.lineItems) l
In this query, we have defined two variables in the FROM clause: o and l. The SELECT clause is necessary because it affirms that we want to return o (and not l) to the client who called the query.

How to traverse relationships
The SELECT clause can traverse relationships. For example, the following query returns all the products in all the orders that contain line-items:
SELECT l.product FROM Order AS o, IN(o.lineItems) l

As you can see, we can use the convenient dot-notation to traverse relationships in the SELECT clause. Behind the scenes, a SQL JOIN statement might occur.

If you've been paying careful attention, you may have noticed that in the earlier example we wrapped a variable o with the phrase OBJECT(), but in this example, we didn't use the phrase OBJECT() at all. The EJB-QL rule is that you only wrap your return result with the phrase OBJECT() if you are returning a standalone variable that does not traverse a relationship using the dot-notation.

How to deal with collections
Let's say we want to find all line-items on all orders. We are thus asking for a collection of return results. Unfortunately, the following SELECT clause will not work:

SELECT o.lineItems
FROM Order AS o

The above doesn't work because SELECT clauses may only return single variables, not collections. To get around this restriction, you need to define a variable in the FROM clause. The following demonstrates a legal way to find all line-items on all orders:

SELECT OBJECT(l)
FROM Order AS o, IN(o.lineItems) l

How to filter for duplicates
You can control whether SELECT clauses return duplicates. For example, take our previous EJB-QL query that finds all products in all order line-items:

SELECT l.product FROM Order AS o, IN(o.lineItems) l

The above query may return duplicate products, because two different people may have ordered the same product. To get a unique list, you must apply the DISTINCT filter, as follows:

SELECT DISTINCT l.product FROM Order AS o, IN(o.lineItems) l

Another choice you have is to declare your finder or select method to return a java.util.Set, which, unlike a java.util.Collection, cannot contain duplicates. If you use a java.util.Set, then both of the above EJB-QL statements return the same unique results.
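To illustrate that collection-type difference in plain Java (a standalone sketch; the product names are made up):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DuplicateDemo {
    public static void main(String[] args) {
        // Results a non-DISTINCT query might return: the same product twice
        List<String> queryResults = Arrays.asList("chip", "board", "chip");

        // A Collection keeps the duplicate ...
        Collection<String> asCollection = new ArrayList<>(queryResults);
        // ... while a Set silently drops it
        Set<String> asSet = new HashSet<>(queryResults);

        System.out.println(asCollection.size()); // 3
        System.out.println(asSet.size());        // 2
    }
}
```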

How to control what gets returned in finders
EJB-QL queries return results differently depending on how the client initiates the query. For example, take the following finder queries (thrown exceptions omitted):

// declared on the home interface
public java.util.Collection findAllProducts();

// declared on the local home interface
public java.util.Collection findAllProducts();

We want EJB objects to be returned for the first query, and EJB local objects to be returned for the second query. The EJB-QL code in the deployment descriptor for both of these query methods could be:
<query>
    <query-method>
        <method-name>findAllProducts</method-name>
        <method-params/>
    </query-method>
    <ejb-ql>SELECT OBJECT(p) FROM Product p</ejb-ql>
</query>

What's great here is that we wrote our EJB-QL once, yet we can reuse it for both the home interface and local home interface. The container will automatically wrap the return results in an EJBObject or EJBLocalObject, respectively. These are the only possible types you can return from a finder query.

How to control what gets returned in selects
With finder methods, the container knows whether the results of a finder should be EJB objects or EJB local objects, because the container could look at whether the query was defined on the home interface or local home interface, respectively. But what about ejbSelect() methods?

Consider the following ejbSelect():
public abstract java.util.Collection ejbSelectAllProducts();

Here, we define the ejbSelect() method on the entity bean class, which doesn't give the container any information about whether our query should return EJB objects or EJB local objects. How does the container know what objects to wrap around the results?

To get around this, EJB requires that you set up a special stanza in the deployment descriptor to tell the container whether the results should be local or remote objects:
<query>
    <query-method>
        <method-name>ejbSelectAllProducts</method-name>
        <method-params/>
    </query-method>
    <result-type-mapping>Local</result-type-mapping>
    <ejb-ql>SELECT OBJECT(p) FROM Product p</ejb-ql>
</query>
The above code will cause the ejbSelect() method to return a collection of EJB local objects. If you want the results to be a collection of EJB objects, then change the result-type-mapping element to have the value Remote.

Finally, note that ejbSelect() methods can also return container-managed fields.
For example:
public abstract java.lang.String ejbSelectProductName();

Finder methods cannot return container-managed fields because finder methods operate remotely and at the granularity of entity beans, not parts of entity beans.

Truth Tables
Let's wrap-up our EJB-QL lesson with a look at the truth tables for how the operations AND, OR, and NOT evaluate. In the tables, the case of unknown means expressions that produce an unknown result, such as the clause:
WHERE NULL IN ('Intel', 'Sun')

AND     | True    | False | Unknown
True    | True    | False | Unknown
False   | False   | False | False
Unknown | Unknown | False | Unknown

Table X.X The AND truth table.

OR      | True | False   | Unknown
True    | True | True    | True
False   | True | False   | Unknown
Unknown | True | Unknown | Unknown

Table X.X The OR truth table.

NOT     |
True    | False
False   | True
Unknown | Unknown

Table X.X The NOT truth table.

Monday, December 10, 2007

The EJB Query Language (EJB-QL) -Part 2

EJB-QL Syntax
An EJB-QL query contains three parts:
1. A required SELECT clause
2. A required FROM clause
3. An optional WHERE clause

We now discuss the details of each of these clauses. We'll cover the SELECT clause last, because it indicates the return results of a query.

The FROM clause
The FROM clause restricts the domain of a query. It indicates what part of the data storage you are querying over--that is, what entity beans you are going to be looking at. In the case of a relational database, the FROM clause would typically restrict which tables you are querying over. For example, the following FROM clause means we are only looking at Order entity beans:

SELECT OBJECT(o)
FROM Order AS o

What we're doing here is declaring a variable in the FROM clause. We are creating a variable, o, which can be used later in the query. In this case, we are re-using that variable in the SELECT clause. You can also re-use that variable in the WHERE clause.

Note that declaring variables will restrict your queries even if you don't use the variables. For example:

SELECT OBJECT(o)
FROM Order AS o, Customer AS c

The above query finds all orders that have customers. Even though we aren't using the variable c anywhere else, we are still excluding orders without customers.

Finally, you should note that the phrase AS is optional and is merely syntactic sugar to help make the query look better. This query produces the same result as the previous one:

SELECT OBJECT(o)
FROM Order o, Customer c

Declaring collection variables
Sometimes you need to declare variables in the FROM clause that represent a collection of values. For example, let's say we want to find all of the line-items which are attached to orders. The following query achieves that:

SELECT OBJECT(l)
FROM Order AS o, IN(o.lineItems) l
The above EJB-QL declares two variables.
· The phrase Order AS o declares a variable o that represents any order entity bean.
· The phrase IN(o.lineItems) l declares a variable l that represents any line-item linked off any order bean.
Thus, you use the AS syntax when declaring a variable representing a single value, and the IN syntax when declaring a variable representing a collection of values. And since the evaluation order is left-to-right, you can use variables on the right that were declared on the left.

Variables only represent one value at a time
Next, consider the following query, which returns all line-items that are attached to orders which are attached to customers:

SELECT OBJECT(l)
FROM Customer AS c, IN(c.orders) o, IN(o.lineItems) l

Notice the phrase o.lineItems. Although o is declared over a collection, it represents only one element of that collection at a time. Thus it is perfectly legal to use the phrase o.lineItems, because in that phrase o represents an individual order, not a collection of orders.

The WHERE clause
The EJB-QL WHERE clause restricts the results of a query. It is where you choose the values you want from the variables declared in the FROM clause. The general syntax of the WHERE clause is "WHERE <conditional expression>". For example:
SELECT OBJECT(o)
FROM Order o
WHERE o.lineItems IS NOT EMPTY

The above query finds all orders that have line-items.

Handling Input Parameters
When performing a query, you will often want to query based upon parameters supplied by the client.
For example, to implement the following finder method which finds a product based on a description:
findProductByDescription(String s)
A WHERE clause can be used as follows:
SELECT OBJECT(p)
FROM Product p
WHERE p.description = ?1
Here, ?1 represents the first parameter passed in. Additional parameters are numbered ?2, ?3, and so on. Note that you don't need to use every parameter declared in the finder/select method.
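For instance, a hypothetical finder findProductsByPriceRange(double low, double high) might be backed by the following query (the method name is an assumption; basePrice follows the Product examples in this article):

```
SELECT OBJECT(p)
FROM Product p
WHERE p.basePrice >= ?1 AND p.basePrice <= ?2
```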

Conditional Expressions
There are many conditional expressions built in to EJB-QL. Here is the complete list.









Each entry below lists the conditional expression, an example, and notes.

Mathematical, comparison, and logical operators

Mathematical operations: +, -, *, /
Comparison operations: =, >, >=, <, <=, <> (not equal)
Logical operators: NOT, AND, OR

Find all products that are computer chips and whose profit margin is positive:

SELECT OBJECT(p)
FROM Product p
WHERE (p.description = 'chip') AND (p.basePrice - p.cost > 0)

· Two entity beans are equal if and only if they share the same primary key value.
· You cannot compare two different entity bean classes.
Between expressions
Find all products whose price is at least 1000 and at most 2000:

SELECT OBJECT(p)
FROM Product p
WHERE p.basePrice BETWEEN 1000 AND 2000

Can also use NOT BETWEEN to return all data that is not between two values.

In expressions

Find all products whose manufacturer is either Intel or Sun:

SELECT OBJECT(p)
FROM Product p
WHERE p.manufacturer IN ('Intel', 'Sun')

Can also use NOT IN to return all data that is not in a range.

Like expressions

Find all products with ids that begin with '12' and end with '3'. For example, '123' or '12993' qualifies, but not '1234':

SELECT OBJECT(p)
FROM Product p
WHERE p.productID LIKE '12%3'

Find all products with ids that begin with '123' and are a total of four characters long. For example, '123c' qualifies, but not '14' nor '12345':

SELECT OBJECT(p)
FROM Product p
WHERE p.productID LIKE '123_'

· % stands for any sequence of zero or more characters
· _ stands for a single character
· You can represent the literal % or _ character by using special escape sequences (see the EJB spec for more)
· You can also use NOT LIKE to achieve the opposite effect

Null comparison expressions

Find all products that have NULL descriptions:

SELECT OBJECT(p)
FROM Product p
WHERE p.description IS NULL

You can also use IS NOT NULL to find all data that has non-NULL values.

Empty collection comparison expressions

Find all orders that have no line-items:

SELECT OBJECT(o)
FROM Order o
WHERE o.lineItems IS EMPTY

· You can also use IS NOT EMPTY to find valid collections.
· In this special case, you can declare collections in the WHERE clause rather than declaring them as variables first in the FROM clause

Collection member expressions

Find all line-items that are attached to orders:

SELECT OBJECT(l)
FROM Order o, LineItem l
WHERE l MEMBER OF o.lineItems

· The word OF is optional
· In this special case, you can declare collections in the WHERE clause rather than declaring them as variables first in the FROM clause
· Can also use NOT MEMBER OF to locate data where elements are not members of collections

Table X.X EJB-QL Conditional Expressions
Note that you can combine multiple conditional expressions, using parentheses to denote order of evaluation. Your container may also provide proprietary extensions to these conditional expressions, perhaps in a separate deployment descriptor.

Friday, November 30, 2007

The EJB Query Language (EJB-QL) - Part 1

In this article, we will explore the syntax and semantics of the EJB Query Language (EJB-QL), the language you use to describe query methods for container-managed persistent entity beans.

Overview
EJB-QL is a standard and portable language for expressing container-managed persistent entity bean query operations. These entity bean query operations can include finder methods (used by external entity bean clients), as well as select methods (used internally by the entity bean itself). EJB-QL is not necessary for bean-managed persistence because the bean provider writes the database access code, which is integrated into the entity bean class itself.

EJB-QL is a new addition to EJB 2.0. Before EJB 2.0, you would need to explain to the container how to implement your query operations in a proprietary way.

For example, you might bundle a container-specific flat-file with your bean. This flat-file would not be portable to other containers, which is very annoying for bean-providers who wish to write components that are container-agnostic.

Throughout these articles, we will use an e-commerce object model to illustrate EJB-QL, using such entity beans as orders, line-items, products, and customers. We designed that object model in Chapter X.

A simple example
Let's kick things off with a simple EJB-QL example. Take the following entity bean remote finder method:
public java.util.Collection findAvailableProducts() throws FinderException, RemoteException;
This finder method is meant to find all products that are currently in stock.

The following EJB-QL in the deployment descriptor will instruct the container on how to generate the database access code that corresponds to this finder method:
...
<entity>
    <ejb-name>Product</ejb-name>
    <home>examples.ProductHome</home>
    <remote>examples.Product</remote>
    <ejb-class>examples.ProductBean</ejb-class>
    <persistence-type>Container</persistence-type>
    <prim-key-class>examples.ProductPK</prim-key-class>
    <reentrant>False</reentrant>
    <cmp-version>2.x</cmp-version>
    <abstract-schema-name>Product</abstract-schema-name>

    <cmp-field>
        <field-name>inventory</field-name>
    </cmp-field>
    ...more container-managed persistent fields...

    <query>
        <query-method>
            <method-name>findAvailableProducts</method-name>
            <method-params/>
        </query-method>
        <ejb-ql>
            <![CDATA[SELECT OBJECT(p) FROM Product p WHERE p.inventory > 0]]>
        </ejb-ql>
    </query>
</entity>
...

In the code above, we are putting together a query that resembles SQL or OQL (see Chapter X for more on OQL). We can refer to entity beans inside the EJB-QL by using that entity bean's abstract-schema-name defined earlier in the deployment descriptor. We can also query its container-managed fields or container-managed relationships, or other entity beans.

In fact, if we're using a relational database, the container will translate this EJB-QL code into SQL code in the form of JDBC statements.

The following SQL is an example of what might be generated depending on your container implementation:

SELECT DISTINCT p.PKEY
FROM PRODUCT p
WHERE p.INVENTORY > 0

The above SQL returns primary keys (not rows) to the container. The container then wraps those primary keys in EJB objects and returns RMI-IIOP stubs to the client that called the finder method. When the client calls a business method on one of those stubs, the EJB object intercepts the call, and the ejbLoad() method is called on the entity bean.

The container will then load the actual rows from the database. Note that this process may be optimized depending on your container implementation.

The power of relationships
The big difference between EJB-QL and SQL is that EJB-QL allows you to traverse relationships between entity beans using a dot-notation. For example:

SELECT o.customer
FROM Order o

In the above EJB-QL, we are returning all customers that have placed orders. We are navigating from the order entity bean to the customer entity bean easily using a dot-notation. This is quite seamless.

What's exciting about this notation is that bean providers don't need to know about tables or columns, rather they merely need to understand the relationships between the entity beans that they've authored. The container will handle the traversal of relationships for us because we declare our entity beans in the same deployment descriptor and ejb-jar file, empowering the container to manage all of our beans and thus understand their relationships.

In fact, you can traverse more than one relationship. That relationship can involve container-managed relationship fields and container-managed persistent fields. For example:

SELECT o.customer.address.homePhoneNumber
FROM Order o

The restriction on this type of relationship traversal is that you are limited by the navigability of the relationships you define in the deployment descriptor. For example, say that in the deployment descriptor you declare that orders have a one-to-many relationship with line-items, but you do not define the reverse many-to-one relationship from line-items to orders. When performing EJB-QL, you can then navigate from orders to line-items, but not from line-items to orders. For more about how to define these types of relationships, see Chapter X.

Tuesday, November 20, 2007

Virtual Functions in C++

A C++ virtual function is a member function of a class whose functionality can be overridden in its derived classes.
The whole function body can be replaced with a new implementation in the derived class. The concept of C++ virtual functions is different from C++ function overloading.

C++ Virtual Function - Properties:

C++ virtual function is,
  • A member function of a class
  • Declared with virtual keyword
  • Usually has a different functionality in the derived class
  • A function call is resolved at run-time

The difference between a non-virtual C++ member function and a virtual member function is that calls to non-virtual member functions are resolved at compile time; this mechanism is called static binding.

Calls to C++ virtual member functions, by contrast, are resolved at run time; this mechanism is known as dynamic binding.

C++ Virtual Function - Reasons:
The most prominent reason why a C++ virtual function will be used is to have a different functionality in the derived class.

For example a Create function in a class Window may have to create a window with white background. But a class called CommandButton derived or inherited from Window, may have to use a gray background and write a caption on the center. The Create function for CommandButton now should have a functionality different from the one at the class called Window.

C++ Virtual function - Example:
This article assumes a base class named Window with a virtual member function named Create. The derived class will be named CommandButton, with an overridden function Create.


#include <iostream>
using namespace std;

class Window // Base class for C++ virtual function example
{
public:
    virtual void Create() // virtual function
    {
        cout << "Base class Window" << endl;
    }
};

class CommandButton : public Window
{
public:
    void Create() // overrides the virtual function in Window
    {
        cout << "Derived class Command Button" << endl;
    }
};

int main()
{
    Window *x, *y;

    x = new Window();
    x->Create();

    y = new CommandButton();
    y->Create();

    return 0;
}

The output of the above program will be,
Base class Window
Derived class Command Button

If the function had not been declared virtual, the base class function would have been called both times, because the function address would have been statically bound at compile time. But since the function is declared virtual, the call is resolved at run time, and the derived class function is invoked through the base class pointer.

C++ Virtual function - Call Mechanism:
Whenever a class declares a C++ virtual function, a v-table is constructed for that class. The v-table holds the addresses of the class's virtual functions, and each object of the class carries a hidden pointer to its class's v-table.

Whenever a call is made to a C++ virtual function, the v-table is used to resolve the function address. This is how dynamic binding happens during a virtual function call.


Friday, November 2, 2007

JSP Model

JSP Model 1 and Model 2 Architectures

The early JSP specifications presented two approaches for building web applications using JSP technology. These two approaches were described in the specification as JSP Model 1 and Model 2 architectures.
Although the terms are no longer used in the JSP specification, they are still widely used and referenced throughout the web-tier development community.

The two JSP architectures differed in several key areas.
The major difference was how and by which component the processing of a request was handled. With the Model 1 architecture, the JSP page handles all of the processing of the request and is also responsible for displaying the output to the client.
This is better seen in Figure 1-1.

Figure 1-1. JSP Model 1 Architecture

Notice that in Figure 1-1 there is no servlet involved in the process; the client request is sent directly to a JSP page, which may communicate with JavaBeans or other services, but ultimately the JSP page selects the next page for the client.
The next view is determined either by the JSP selected or by parameters within the client's request.

In direct comparison to the Model 1 approach, in the Model 2 architecture, the client request is first intercepted by a servlet, most often referred to as a Controller servlet.
The servlet handles the initial processing of the request and also determines which JSP page to display next.
This approach is illustrated in Figure 1-2.


Figure 1-2. JSP Model 2 Architecture

As you can see from Figure 1-2, in the Model 2 architecture, a client never sends a request directly to a JSP page. The controller servlet acts as a sort of traffic cop. This allows the servlet to perform front-end processing such as authentication and authorization, centralized logging, and internationalization support.

Once processing of the request has finished, the servlet directs the request to the appropriate JSP page. How exactly the next page is determined can vary widely across different applications, for example, in simpler applications, the next JSP page to display may be hard coded in the servlet based on the request, parameters, and current application state. In other more sophisticated web applications, a workflow/rules engine may be used.
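The view-selection step described above can be sketched in plain Java, with the servlet plumbing omitted (the action names and page paths here are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class ViewResolver {
    // Hypothetical hard-coded mapping from request action to next JSP page
    private static final Map<String, String> PAGES = new HashMap<>();
    static {
        PAGES.put("login", "/login.jsp");
        PAGES.put("list", "/productList.jsp");
    }

    // The controller would call this after its front-end processing,
    // then forward the request to the returned page.
    public static String nextPage(String action) {
        return PAGES.getOrDefault(action, "/home.jsp");
    }

    public static void main(String[] args) {
        System.out.println(nextPage("list"));    // /productList.jsp
        System.out.println(nextPage("unknown")); // /home.jsp
    }
}
```

A real controller servlet would invoke something like nextPage() and then hand the request to the selected JSP via RequestDispatcher.forward().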

As you can see, the main difference between the two approaches is that the Model 2 architecture introduces a controller servlet that provides a single point of entry and also encourages more reuse and extensibility than Model 1. With the Model 2 architecture, there is also a clear separation of the business logic, presentation output, and request processing.

This separation is often referred to as a Model-View-Controller (MVC) pattern. While the Model 2 architecture might seem overly complicated, it can actually simplify an application greatly. Web applications built using the Model 2 approach are generally easier to maintain and can be more extensible than comparable applications built around the Model 1 architecture.

All of this doesn’t mean that applications built using the Model 1 approach are incorrectly designed. The Model 1 architecture might be the best decision for smaller applications that have simple page navigation, no need for centralized features, and are fairly static.

However, for larger enterprise-size web applications, it would be more advantageous to utilize the Model 2 approach.

Tuesday, October 30, 2007

Variable Argument Functions in C++

In C++, a function can accept a variable number of arguments. The argument values can be retrieved using the va_start, va_arg, and va_end macros.

These macros assume that the function will be called with a fixed number of required parameters and variable number of optional parameters.

The following sample program uses a function Add with variable arguments and returns the value of the added items.

#include <cstdio>
#include <cstdarg>

// Adds two required integers plus any number of optional integers.
// The optional argument list is terminated by a -1 sentinel, because
// the va_arg macros give no way to detect the end of the list.
int Add(int a, int b, ...)
{
    int total = a + b;

    // Declare a va_list and initialize it with va_start, naming the
    // last required parameter.
    va_list args;
    va_start(args, b);

    // Fetch optional int arguments until the -1 sentinel appears.
    int value;
    while ((value = va_arg(args, int)) != -1)
        total += value;

    va_end(args);
    return total;
}

int main()
{
    // 2 + 3 + 4 = 9; the trailing -1 marks the end of the arguments.
    printf("Total of C++ Variable Arguments: %d\n", Add(2, 3, 4, -1));
    return 0;
}

The above sample takes some integer parameters and returns their added result. The required parameters a and b are accessed directly; the optional parameters must be read with va_arg, and the caller must supply the -1 sentinel to mark the end of the list.

Monday, October 15, 2007

Inline Functions in C++

When a function is declared inline, the function is expanded at the calling block.

The function is not treated as a separate unit like other normal functions.

But a compiler is free to decide whether a function qualifies to be an inline function. If the inline function is found to have a large body, it will not be treated as inline, but as a normal function.

Inline expansion resembles macro substitution, but unlike a macro, an inline function is a real function with full type checking.

They are declared with the keyword inline as follows.
//Declaration for C++ Tutorial inline sample:

int add(int x,int y);

//Definition for C++ Tutorial inline sample:
inline int add(int x,int y)
{
return x+y;
}

In fact, the keyword inline is not strictly necessary. If a member function is defined with its body directly inside the class definition and that body is small, the compiler may treat it as inline automatically.

As implied, inline functions are meant for small blocks of code that are executed repeatedly. Expanding such functions inline avoids the function-call overhead, which can make a measurable performance difference.

Wednesday, October 10, 2007

Enhancements of Visual Studio.Net 2005

In this article, we will briefly survey the enhancements incorporated in the new release of Microsoft Visual Studio .NET 2005, code-named Whidbey, which is touted to bring major improvements in software development experience and productivity.

In this version, Microsoft has added Express editions for each of the languages C#, VB, C++, and J#, along with ASP.NET and SQL Server 2005 separately. All of these Express editions are freely downloadable and can be used for learning and development.
Below is a brief look at some of the feature enhancements in the new version.

Visual Basic:
The Visual Basic IDE has been enhanced with many features to improve developer productivity. Microsoft has tried to reduce the pain of syntax errors by making the IDE more intuitive during development, easing the task of coding for both new and experienced programmers. Key improvements include:

· Reduction in syntax errors through more intuitive IntelliSense.
· Edit-and-continue debugging: the ability to edit code and keep running without restarting the program, reducing the time required for a debugging cycle.
· Improved exception-handling mechanisms, with clearer error messages that pave the way for faster detection of errors.
· Better upgrade support for old Visual Basic 6.0 applications.
· The MyServices abstraction: a series of coded shortcuts that make it easier to find system and application resources. For example, My.Computer and My.WebServices are programmatic shortcuts to system resources and Web service references, respectively.

Visual C++:
Visual C++ will offer expanded support for the CLR and the .NET Framework. Its enhancements include:

· Profile Guided Optimization (POGO), an optimization facility not found in the other .NET languages. POGO enables the compiler to instrument an application and collect details on how it is used at run time; Visual C++ then optimizes the generated code based on these real-world usage patterns. It is currently supported for 64-bit applications, will also be available for 32-bit applications, and is accessible from the Build menu in Visual Studio .NET.
· Availability of 64-bit compilers targeting both Intel and AMD based hardware.
· Enhanced support for the Standard Template Library, with the STL tuned for interacting with both managed code and data.
· Support for a new category of type called a handle. Handles are pointers, but use the caret (^) symbol for access.

Visual C#:
C# Rapid App Development gets a major boost with .net framework 2.0 and Visual Studio 2005. The new features generics, iterators, anonymous methods, partial types and refactoring are some of the items which deserve to be highlighted.

Generics are a parallel for C++ templates. They allow high level of code reuse and further speed up the process of Software development.Anonymous methods are dynamic methods which need not be pre-defined. They can be defined at the point of need. They can be used in place of event handler delegates.

Another huge boost could be the support for Refactoring. Refactoring is one among the basic tenets of Test Driven Development. Visual C# 2005 automates this feature by providing support for this feature.

Visual Studio 2005 delivers a long-requested feature, which is the ability to correct programming errors during debugging and continue to run without restarting the program.

The C# IDE includes a suite of tools that automate many common refactoring code tasks.
Developers can easily rename classes, fields, properties, and methods, extract code into its own method, reorder or delete parameters to a method, promote a local variable to be a parameter, encapsulate fields, and perform many other refactoring tasks.

The tools ensure that when any change is made, all dependent modules are also updated.

Web Development:

Apart from the above, support for web development is also enhanced to a great extent: integrated database development with SQL Server 2005, a Personal Web starter kit enabling easier web application development, and multi-browser support.

Thursday, September 20, 2007

Java FX Script

The JavaFX Script programming language (known as JavaFX) is a declarative, statically typed scripting language from Sun Microsystems, Inc. As mentioned on the Open JavaFX (OpenJFX) web site, JavaFX technology has a wealth of features, including the ability to make direct calls to Java technology APIs. Because JavaFX Script is statically typed, it also has the same code structuring, reuse, and encapsulation features, such as packages, classes, inheritance, and separate compilation and deployment units, which make it possible for you to create and maintain very large programs using Java technology.

This article is an introduction to the JavaFX programming language, targeted at those who are already familiar with Java technology and the basics of scripting languages.

The JavaFX Pad Application
If you have a Java Runtime Environment (JRE) on your system, the easiest way to get started with JavaFX technology is to fire up the Java Web Start-enabled demonstration program, JavaFX Pad. Once you start the application, you should see a screen similar to what appears in Figure 1.

Figure 1. The JavaFX Pad Application Running on Microsoft Windows OS, JDK 6

JavaFX Pad starts with a default application already loaded, which it immediately executes. The JavaFX Pad application is a great way to see exactly what you're doing at runtime, making changes as you go and instantaneously seeing the results.

JavaFX Technology: A Statically Typed Language

The JavaFX programming language is a scripting language with static typing. What exactly does this mean? Consider the following:

var myVariable = "Hello";

This declaration, similar to what you may find in JavaScript technology, creates a variable called myVariable and assigns it the string value Hello. However, after declaring the variable, let's try to assign it something other than a string:

myVariable = 12345;

Because the code does not use quotation marks around 12345, this variable is now being assigned an integer instead of a string. In JavaScript technology, dynamically retyping the variable will work fine. However, a statically typed language such as JavaFX will not allow this. This is because myVariable was initially declared as a String type, and the code later tries to reassign it as an integer. With JavaFX, a variable that is declared as a String must remain a String.

In fact, if you enter those two lines of code into the JavaFX Pad demo, you'll immediately see an error at the bottom of the window, as shown in Figure 2.
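Since this article targets Java programmers, it is worth noting that Java behaves exactly the same way; the analogous reassignment fails at compile time rather than at runtime (the class name here is illustrative):

```java
public class StaticTypingDemo {
    public static String reassigned() {
        String myVariable = "Hello";
        // myVariable = 12345;  // does not compile: int cannot be converted to String
        myVariable = "12345";   // assigning another String is fine
        return myVariable;
    }

    public static void main(String[] args) {
        System.out.println(reassigned()); // prints "12345"
    }
}
```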

Thursday, September 13, 2007

Using EJB with AJAX

AJAX

AJAX is an acronym for Asynchronous JavaScript And XML. AJAX is not a new programming language, but simply a new technique for creating better, faster, and more interactive web applications. AJAX uses JavaScript to send and receive data between a web browser and a web server.

It makes web pages more responsive by exchanging data with the web server behind the scenes, instead of reloading an entire web page each time a user makes a change. AJAX is a technology that runs in your browser. It uses asynchronous data transfer (HTTP requests) between the browser and the web server, allowing web pages to request small bits of information from the server instead of whole pages. It makes Internet applications smaller, faster, and more user-friendly.

What is it?
A traditional web application will submit input (using an HTML form) to a web server. After the web server has processed the data, it will return a completely new web page to the user.
Because the server returns a new web page each time the user submits input, traditional web applications often run slowly and tend to be less user-friendly.

With AJAX, web applications can send and retrieve data without reloading the whole web page. This is done by sending HTTP requests to the server (behind the scenes), and by modifying only parts of the web page using JavaScript when the server returns data.
XML is commonly used as the format for receiving server data, although any format, including plain text, can be used.

The standard and well-known method for user interaction with web-based applications involves the user entering information (e.g. filling out a form), submitting that information to the server, and awaiting a page refresh or redirect to return the response from the server.
This is at times frustrating for the user, besides being rather different from the 'desktop' style of user interface with which he or she may be more familiar.
Ajax (Asynchronous Javascript And XML) is a technique (or, more correctly, a combination of techniques) for submitting server requests 'in the background' and returning information from the server to the user without the necessity of waiting for a page load.
Ajax is actually a combination of several technologies working together to provide this capability.
How does it work?
Instead of a user request being made of the server via, for example, a normal HTTP POST or GET request, such as would be made by submitting a form or clicking a hyperlink, an Ajax script makes a request of the server using the JavaScript XMLHttpRequest object.
Although this object may be unfamiliar to many, it behaves like a fairly ordinary JavaScript object. As you may well know, when using a JavaScript image object we may dynamically change the URL of the image source without a page refresh. XMLHttpRequest retrieves information from the server in a similarly invisible manner.
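The essence of the pattern, a request fired in the background with a callback that updates only part of the page state, can be sketched in Java as well. Here `fetchFromServer` is a hypothetical stand-in for the real network round trip, not an actual server call:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;

public class AsyncRequestDemo {
    // Hypothetical stand-in for a real server round trip.
    public static String fetchFromServer(String query) {
        return "result for " + query;
    }

    public static void main(String[] args) {
        AtomicReference<String> pagePortion = new AtomicReference<>("loading...");

        // Fire the request in the background; the caller is not blocked.
        CompletableFuture<Void> request = CompletableFuture
                .supplyAsync(() -> fetchFromServer("partial-form-data"))
                .thenAccept(pagePortion::set); // callback updates just this portion

        request.join(); // in a real UI there would be no blocking wait
        System.out.println(pagePortion.get());
    }
}
```

The key point mirrors XMLHttpRequest: only `pagePortion` changes when the response arrives; nothing equivalent to a full page reload happens.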
AJAX in EJB:

Enterprise applications can use Ajax to provide better Web interfaces that increase the user productivity. In many cases, it is possible to submit a partially completed form to obtain useful information from the server application. For example, the server could perform some early validation or it could use the partial user input to suggest values for the empty form fields, speeding up the data entry process. Ajax can also be used to connect to data feeds whose information is displayed without refreshing the whole page.

The following diagram depicts the EJB-AJAX application's architecture:





Thursday, August 23, 2007

WSDL and UDDI

Understanding WSDL and UDDI
Web Services Description Language (WSDL) is one of the prime specifications in web services, the other two being SOAP and UDDI. WSDL is the description language for web services: it describes a set of SOAP messages and how those messages are exchanged across the network. WSDL is in XML format, so it can be easily understood and edited by humans and machines.

Another advantage of WSDL being in XML format is that it is programming language independent and also platform independent. In addition, WSDL defines where the web service is available from and what communications protocol has been used to talk to the web service. As a result the WSDL file describes everything that is required to write a program for an XML Web service.

There are tools available in Microsoft Visual Studio .NET to read a WSDL file and generate the code required to communicate with an XML Web service.

Universal Description, Discovery, and Integration (UDDI) is a directory where you can expose your web services for other users to access easily. You can also consume web services that are already published in UDDI. However, you can also publish a web service without registering it in UDDI.

DISCO is another directory where you can post your web service, but if you want to reach the maximum number of customers, you should place it in UDDI. The UDDI directory offers three parts for you to register:

• White Pages
• Yellow Pages
• Green Pages

The white pages consist of the description such as name and address of the company offering the service.

The yellow pages consist of industrial categories based on standard taxonomies such as North American Industry Classification System and Standard Industrial Classification.

The green pages describe the interface to the web service in detail so that anyone can write an application after using the web service.

Web services are described in the UDDI directory through a document called a tModel (Type Model). Normally, this tModel contains a WSDL file that describes a SOAP interface to an XML Web service, but the tModel is flexible enough to describe almost any kind of web service.

Apart from consuming web services from UDDI, you can also search UDDI for a particular web service, as well as for information about the companies that posted them. At times you might know the names of the companies that offer web services but not be aware of which services they offer.

WS-Inspection is a related specification that allows you to discover the collection of web services offered by a particular company and evaluate them according to your requirements.

Wednesday, August 8, 2007

WI-FI

Why / Where Should We Use Wi-Fi
The Wi-Fi LAN has broad application nowadays. Because installation is comfortable and quick, people often replace old wired LANs with Wi-Fi. Such a connection allows you to move your machine around the place without losing the Internet or other network resources. Working on your laptop, you can check your mail from anywhere in your home or office.

Some busy places like airports, libraries, schools, or even coffee bars offer constant Internet connectivity over wireless LAN, so retrieving new files, cruising the global network, or corresponding with others will no longer be a problem in those (and many other) places.

The most important shortcoming of Wi-Fi is its range. So far we may have difficulty making a connection with a receiver that is more than 50-75 meters away (inside buildings).

The signal would need to be stronger to cover larger spaces. Additionally, some wireless adapters work on frequencies that are currently used by many other wireless devices. This can cause serious interference, so connection performance can be quite poor.

However, building a Wi-Fi network is often the cheapest way to achieve the desired connectivity with the surroundings. The price of a single wireless adapter is decreasing almost every day, so covering a large network area by means of Wi-Fi is very reasonable. You will not need to arrange wires everywhere, and you save on installation time. Moreover, most Wi-Fi adapters have user-friendly configuration and diagnostic tools which can help you adjust or change your WLAN settings, or even do everything for you.

Security:
What about security? Is there a possibility of our data being stolen? Security is a personal decision, but with a wireless connection we should pay attention to protecting our private files and encrypting sent messages.

Actually, security modules have been important since the beginning of the Wi-Fi project. To prevent others from intercepting your data, the designers implemented many security techniques, such as Wi-Fi Protected Access (based on encryption), Virtual Private Networks (making virtual "tunnels"), Media Access Control filtering (rejecting unknown MAC addresses), RADIUS authentication and authorization (using login and password), and Kerberos (key distribution).

It is also possible to combine some of these security mechanisms, making your transmissions even more secure. On the other hand, providing such security in public places (like Internet cafes) may not be practical. When connecting to a protected wireless network, you will be asked for a security code, encryption key, or password. If you do not know it, you will not be able to establish a communication link and use Internet resources.

For that reason, most public areas do not use security modules, leaving Wi-Fi users' data unprotected.

Monday, July 30, 2007

ADO.NET 2.0 – Part 2

In the previous article I discussed 8 new features of ADO.NET 2.0; the following are the remaining ones.

9. DataReader’s New Execute Methods
The command object now supports more execute methods. Besides the old ExecuteNonQuery, ExecuteReader, ExecuteScalar, and ExecuteXmlReader, the new execute methods are ExecutePageReader, ExecuteResultSet, and ExecuteRow.
Figure 2 shows all of the execute methods supported by the command object in ADO.NET 2.0.


Figure 2. Command's Execute methods.

10. Improved Performance for DataSet Remoting
The major problem with the ADO.NET 1.x DataSet is serialization. Microsoft has worked hard on this and improved serialization performance considerably. In ADO.NET 1.x, a DataSet is always serialized as XML. In ADO.NET 2.0 it is still XML by default, but there is an option to switch to binary serialization using the RemotingFormat property, which takes a SerializationFormat value. Look at the following code.

Dim format As New BinaryFormatter()
Dim ds As DataSet = CType(DataGridView1.DataSource, DataSet)
Using fs As New FileStream("c:\sar1.bin", FileMode.CreateNew)
    ds.RemotingFormat = SerializationFormat.Binary
    ' The other option (and the default) is SerializationFormat.Xml
    format.Serialize(fs, ds)
End Using

In the above code snippet, we serialize the DataSet into a FileStream. Comparing file sizes, the XML format is more than three times bigger than the binary format. And when remoting a DataSet of more than 1,000 rows, the binary format is up to 80 times faster than the XML format.

11. DataSet and DataReader Transfer
In ADO.NET 2.0, we can load a DataReader directly into a DataSet or DataTable, and we can also get a DataReader back from a DataSet or DataTable. DataTable now has most of the methods of DataSet; for example, the WriteXml and ReadXml methods are now available on DataTable as well. A new Load method on DataSet and DataTable loads a DataReader into the DataSet/DataTable. In the other direction, DataSet and DataTable have a CreateDataReader method which returns a DataReader over the DataTable/DataSet. We can even transfer between DataTable and DataView. Check out the following example.

Dim dr As SqlDataReader
Dim conn As New SqlConnection(Conn_str)
conn.Open()
Dim sqlc As New SqlCommand("Select * from Orders", conn)
dr = sqlc.ExecuteReader(CommandBehavior.CloseConnection)
Dim dt As New DataTable("Orders")
dt.Load(dr)

12. Batch Updates
In previous versions of ADO.NET, if we change a DataSet and update it using the DataAdapter.Update method, it makes a round trip to the data source for each modified row. This is fine with a few records, but if more than 100 records are modified it makes 100 calls from the data-access layer to the database, which is not acceptable. In this release, Microsoft has changed this behaviour by exposing a property called "UpdateBatchSize". Using it, we can specify how many rows are grouped into a single trip to the database. For example, to send 50 records per trip, set UpdateBatchSize to 50.
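The grouping behind UpdateBatchSize can be sketched language-neutrally. This Java snippet (names illustrative, no database involved) shows how 120 modified rows with a batch size of 50 become 3 round trips instead of 120:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchDemo {
    // Split modified rows into groups of at most batchSize,
    // mirroring what UpdateBatchSize does for DataAdapter.Update.
    public static <T> List<List<T>> batches(List<T> rows, int batchSize) {
        List<List<T>> result = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += batchSize) {
            result.add(rows.subList(i, Math.min(i + batchSize, rows.size())));
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 1; i <= 120; i++) rows.add(i);
        // Each batch would cost one database round trip.
        System.out.println(batches(rows, 50).size()); // prints 3
    }
}
```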

13. Common Provider Model
If we want provider-independent data access in our application, previously we had to write our own factory classes to return the required objects such as connections and commands; earlier releases only offered provider-independent interfaces. In ADO.NET 2.0, we have dedicated factory classes for the common provider model. A new class, DbProviderFactories, is included in this release; its GetFactoryClasses method lists all the providers installed on the machine, and its GetFactory method returns a provider-specific factory given the provider name as a parameter.

Check out the following example, in which we fetch values from the database without knowing in advance which provider we are working with. We only need to pass the provider name, which can come from configuration and can change.

Dim pf As DbProviderFactory = DbProviderFactories.GetFactory(providername)
Using dbc As DbConnection = pf.CreateConnection()
    dbc.ConnectionString = Conn_str
    dbc.Open()
    Dim comm As DbCommand = dbc.CreateCommand()
    comm.CommandText = "Select * from orders"
    Dim dr As DbDataReader = comm.ExecuteReader(CommandBehavior.CloseConnection)
    Dim dt As New DataTable("Orders")
    dt.Load(dr)
End Using
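Under the hood, this common provider model is essentially a factory registry keyed by provider name. A minimal Java sketch of the same pattern (the class, interface, and provider names here are illustrative, not a real API):

```java
import java.util.Map;
import java.util.function.Supplier;

public class ProviderFactoryDemo {
    public interface Connection { String describe(); }

    // Registry of installed providers, analogous to DbProviderFactories.
    private static final Map<String, Supplier<Connection>> PROVIDERS = Map.of(
            "System.Data.SqlClient", () -> (Connection) () -> "SQL Server connection",
            "System.Data.OracleClient", () -> (Connection) () -> "Oracle connection");

    // Analogous to DbProviderFactories.GetFactory(providerName).
    public static Supplier<Connection> getFactory(String providerName) {
        return PROVIDERS.get(providerName);
    }

    public static void main(String[] args) {
        // The provider name would normally come from configuration.
        Connection dbc = getFactory("System.Data.SqlClient").get();
        System.out.println(dbc.describe()); // prints "SQL Server connection"
    }
}
```

The calling code depends only on the abstract `Connection` interface, just as the VB.NET example above depends only on DbConnection and DbCommand.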

14. Bulk Copy
Bulk copy means moving data from one data source to another. Previously we would simply do this inside the database, since earlier releases did not give us many options. But in ADO.NET 2.0 we can do it from the data-access layer itself.
A new class called "SqlBulkCopy" is included in this release to do this work for us. Using this class we can specify the source data and the destination table, and we can even map columns between tables; by default it copies column to column. Check out the following example.

Dim dr As SqlDataReader
Dim conn As New SqlConnection(Conn_str)
Dim conn1 As New SqlConnection(Conn_str1)
conn.Open()
conn1.Open()
Dim sqlc As New SqlCommand("Select * from Orders", conn)
'dr = sqlc.ExecutePageReader(CommandBehavior.CloseConnection, 10, 10)

dr = sqlc.ExecuteReader(CommandBehavior.CloseConnection)
Dim dt As New DataTable("Orders")
Dim bulkcopy As New SqlBulkCopy(conn1)
bulkcopy.DestinationTableName = "MVPOrders"
bulkcopy.WriteToServer(dr)

15. Multiple Active ResultSets
Using this feature we can have more than one simultaneous pending request per connection, i.e. multiple active DataReaders are possible. Previously, when a DataReader was open and we used the same connection for another DataReader, we would get the error "System.InvalidOperationException: There is already an open DataReader associated with this Connection which must be closed first". This error no longer occurs, thanks to MARS (Multiple Active Result Sets). This feature is supported only in SQL Server 2005 (code-named Yukon).

16. Conclusion
ADO.NET 2.0 provides many new and improved features that help developers improve performance and reduce code. In this article and the previous one, I discussed the top 15 features of ADO.NET 2.0.

Wednesday, July 18, 2007

ADO.NET 2.0 - Part 1

Following is the new Features of ADO.NET 2.0


1. Data Paging
Custom paging is one of the major requirements in ASP.NET, and paging is an important feature in Windows applications as well. In previous releases, we needed to write a stored procedure to do paging in our applications.

But in ADO.NET 2.0, we can do it very simply. A new API, ExecutePageReader, on SqlCommand does all the work for us and returns only the required records. This method is very similar to ExecuteReader, but it accepts two extra parameters: the starting row number and the number of rows. It also returns a DataReader. For example, check out the following code snippet.


Dim dr As SqlDataReader
Dim conn As New SqlConnection(Conn_str)
conn.Open()
Dim sqlc As New SqlCommand("Select * from Orders", conn)
dr = sqlc.ExecutePageReader(CommandBehavior.CloseConnection, 10, 10)
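The paging semantics (skip to a starting row, return a fixed number of rows) can be sketched in plain Java, independent of any database (names illustrative):

```java
import java.util.List;

public class PagingDemo {
    // Return `count` rows starting at row index `start`,
    // the same semantics as the two extra ExecutePageReader parameters.
    public static <T> List<T> page(List<T> rows, int start, int count) {
        int from = Math.min(start, rows.size());
        int to = Math.min(start + count, rows.size());
        return rows.subList(from, to);
    }

    public static void main(String[] args) {
        List<String> orders = List.of("o1", "o2", "o3", "o4", "o5");
        System.out.println(page(orders, 2, 2)); // prints [o3, o4]
    }
}
```

The benefit of doing this server-side, as ExecutePageReader does, is that only the requested page crosses the wire.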

2. Asynchronous Data Access
In ADO.NET 1.x, methods like ExecuteReader, ExecuteScalar, and ExecuteNonQuery execute synchronously and block the current thread; even opening a connection to the database blocks it. In ADO.NET 2.0, these methods come with Begin and End counterparts to support asynchronous execution.

This asynchronous model is very similar to the standard .NET Framework asynchronous pattern, and we can even use a callback mechanism with this approach.

Asynchronous data access is currently supported only in SqlClient, but complete API support is available for other providers to implement this mechanism.

3. Connection Details
Now we can get more details about a connection by setting the connection's StatisticsEnabled property to True. The Connection object provides two new methods, RetrieveStatistics and ResetStatistics. The RetrieveStatistics method returns a HashTable filled with information about the connection, such as data transferred, user details, cursor details, buffer information, and transactions.

4. DataSet, RemotingFormat Property
When DataSet.RemotingFormat is set to binary, the DataSet is serialized in binary format instead of XML tagged format, which improves the performance of serialization and deserialization operations significantly.

5. DataTable’s Load and Save Methods
In previous versions of ADO.NET, only DataSet had Load and Save methods. The Load method can load data from objects such as XML into a DataSet object, and the Save method saves the data to persistent media. Now DataTable also supports these two methods.
We can also load a DataReader object into a DataTable by using the Load method.

6. New Data Controls
In the Toolbox, we will see these new controls: DataGridView, DataConnector, and DataNavigator (see Figure 1). Using these controls, we can provide navigation (paging) support for the data in data-bound controls.


Figure 1. Data bound controls.

7. DbProviderFactories Class
This class provides a list of the available data providers on a machine. We can use this class and its members to find the best-suited data provider for a database when writing database-independent applications.

8. Customized Data Provider
By providing the factory classes, ADO.NET now extends its support to custom data providers. We no longer have to write provider-dependent code: we use the data provider base classes and let the connection string do the trick for us.

Other features are covered in the next article.

Wednesday, July 4, 2007

C++ and Java

Comparison of C++ and Java

Advantages Of C++
Each computer language has a niche it is known for. C++ boasts object-oriented programming that is well structured, easy to work with, and doesn't require very many lines of code to perform simple tasks. Although C++ is backwards-compatible with its predecessor, the C language, C is not object oriented while C++ is.

C++ is perhaps one of the easiest computer languages to learn, as much of the syntax is straightforward. In fact, it is often taught in many college classrooms as a first language for Computer Science majors.

The language is not to be underestimated, however, as it is still extremely flexible and functional in the workforce. Although C++ is a high-level language, it is very powerful in that it gives the programmer benefits otherwise only available in assembly (low-level) languages.

For example, programmers have much control over memory management, as can be demonstrated with arrays and linked lists. Yet another benefit of C++ is its ability to handle OOP, or object-oriented programming.

By using functions and what are known as classes, certain parts of the code may be re-used multiple times throughout the program. For example, suppose a function was written to add two numbers passed into it and to print out the result.

This function can be re-used multiple times by passing in two different numbers each time. Perhaps one of the most important advantages of C++, however, is its ability to work in cross-platform environments, thanks to the ANSI standard.

In other words, C++ code can be used to develop programs for a vast range of operating systems, including MS-DOS, Windows, Macintosh, UNIX, and Linux, to name just a few. Unfortunately, GUI (graphical user interface) development in C++ varies greatly among operating systems.

Microsoft Visual C++, for example, allows for graphics in Windows. Qt, meanwhile, can be used on UNIX-based machines.

Advantages Of Java
Java is a fairly new language developed to improve on C++. Unlike C++, it is completely object oriented: the use of classes is not optional. Java also makes pointer-style structures easier to work with than C++; linked lists are extremely easy to develop.
In addition, pre-defined methods (functions) exist for almost everything you could imagine. One of the biggest advantages of Java over C++ is that GUI development is cross-platform: the exact same code can run on virtually any operating system. For this reason, Java is a viable solution for many web-based applications.
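Linked lists, for instance, require no node structures or manual pointer management at all; the standard library already provides them:

```java
import java.util.LinkedList;

public class LinkedListDemo {
    public static void main(String[] args) {
        // No nodes, no pointers, no manual memory management.
        LinkedList<String> list = new LinkedList<>();
        list.add("first");
        list.add("second");
        list.addFirst("zeroth");

        System.out.println(list.getFirst()); // prints "zeroth"
        System.out.println(list.size());     // prints 3
    }
}
```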

Tuesday, June 26, 2007

XML - 10 Points

10 points - to Know XML

1. XML is for structuring data
Structured data includes things like spreadsheets, address books, configuration parameters, financial transactions, and technical drawings. XML is a set of rules (you may also think of them as guidelines or conventions) for designing text formats that let you structure your data. XML is not a programming language, and you don't have to be a programmer to use it or learn it. XML makes it easy for a computer to generate data, read data, and ensure that the data structure is unambiguous. XML avoids common pitfalls in language design: it is extensible, platform-independent, and it supports internationalization and localization. XML is fully Unicode-compliant.

2. XML looks a bit like HTML
Like HTML, XML makes use of tags (words bracketed by '<' and '>') and attributes (of the form name="value"). While HTML specifies what each tag and attribute means, and often how the text between them will look in a browser, XML uses the tags only to delimit pieces of data, and leaves the interpretation of the data completely to the application that reads it. In other words, if you see "<p>" in an XML file, do not assume it is a paragraph. Depending on the context, it may be a price, a parameter, a person, a p... (and who says it has to be a word with a "p"?).

3. XML is text, but isn't meant to be read
Programs that produce spreadsheets, address books, and other structured data often store that data on disk, using either a binary or text format. One advantage of a text format is that it allows people, if necessary, to look at the data without the program that produced it; in a pinch, you can read a text format with your favorite text editor. Text formats also allow developers to more easily debug applications. Like HTML, XML files are text files that people shouldn't have to read, but may when the need arises. Compared to HTML, the rules for XML files allow fewer variations. A forgotten tag, or an attribute without quotes makes an XML file unusable, while in HTML such practice is often explicitly allowed. The official XML specification forbids applications from trying to second-guess the creator of a broken XML file; if the file is broken, an application has to stop right there and report an error.
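This strictness is easy to observe with any conforming parser. For example, Java's built-in DOM parser refuses a document with a forgotten closing tag instead of second-guessing it:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;

public class StrictnessDemo {
    public static boolean isWellFormed(String xml) {
        try {
            DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
            return true;
        } catch (Exception e) {
            // A conforming parser must stop and report an error, not guess.
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isWellFormed("<order><item>chip</item></order>")); // prints true
        System.out.println(isWellFormed("<order><item>chip</order>"));        // prints false
    }
}
```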

4. XML is verbose by design
Since XML is a text format and it uses tags to delimit the data, XML files are nearly always larger than comparable binary formats. That was a conscious decision by the designers of XML. The advantages of a text format are evident (see point 3), and the disadvantages can usually be compensated at a different level. Disk space is less expensive than it used to be, and compression programs like zip and gzip can compress files very well and very fast. In addition, communication protocols such as modem protocols and HTTP/1.1, the core protocol of the Web, can compress data on the fly, saving bandwidth as effectively as a binary format.
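The "compensate at a different level" point is easy to verify: repetitive tagged text is exactly what general-purpose compressors handle well. A small Java check (sizes are illustrative and depend on the gzip implementation):

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPOutputStream;

public class CompressionDemo {
    public static int gzippedSize(byte[] data) {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
                gz.write(data);
            }
            return out.size();
        } catch (Exception e) {
            return -1; // not expected for in-memory streams
        }
    }

    public static void main(String[] args) {
        // Verbose, repetitive markup: the worst case for raw size,
        // the best case for compression.
        StringBuilder xml = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            xml.append("<item><name>widget</name><qty>1</qty></item>");
        }
        byte[] raw = xml.toString().getBytes();
        System.out.println(raw.length > 10 * gzippedSize(raw)); // prints true
    }
}
```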

5. XML is a family of technologies
XML 1.0 is the specification that defines what "tags" and "attributes" are. Beyond XML 1.0, "the XML family" is a growing set of modules that offer useful services to accomplish important and frequently demanded tasks. XLink describes a standard way to add hyperlinks to an XML file. XPointer is a syntax in development for pointing to parts of an XML document. An XPointer is a bit like a URL, but instead of pointing to documents on the Web, it points to pieces of data inside an XML file. CSS, the style sheet language, is applicable to XML as it is to HTML. XSL is the advanced language for expressing style sheets. It is based on XSLT, a transformation language used for rearranging, adding and deleting tags and attributes. The DOM is a standard set of function calls for manipulating XML (and HTML) files from a programming language. XML Schemas 1 and 2 help developers to precisely define the structures of their own XML-based formats. There are several more modules and tools available or under development. Keep an eye on W3C's technical reports page.

6. XML is new, but not that new
Development of XML started in 1996 and it has been a W3C Recommendation since February 1998, which may make you suspect that this is rather immature technology. In fact, the technology isn't very new. Before XML there was SGML, developed in the early '80s, an ISO standard since 1986, and widely used for large documentation projects. The development of HTML started in 1990. The designers of XML simply took the best parts of SGML, guided by the experience with HTML, and produced something that is no less powerful than SGML, and vastly more regular and simple to use. Some evolutions, however, are hard to distinguish from revolutions... And it must be said that while SGML is mostly used for technical documentation and much less for other kinds of data, with XML it is exactly the opposite.

7. XML leads HTML to XHTML
There is an important XML application that is a document format: W3C's XHTML, the successor to HTML. XHTML has many of the same elements as HTML. The syntax has been changed slightly to conform to the rules of XML. A format that is "XML-based" inherits the syntax from XML and restricts it in certain ways (e.g., XHTML allows "<p>", but not "<P>"); it also adds meaning to that syntax (XHTML says that "<p>" stands for "paragraph", and not for "price", "person", or anything else).

8. XML is modular
XML allows you to define a new document format by combining and reusing other formats. Since two formats developed independently may have elements or attributes with the same name, care must be taken when combining those formats (does "<p>" mean "paragraph" from this format or "person" from that one?). To eliminate name confusion when combining formats, XML provides a namespace mechanism. XSL and RDF are good examples of XML-based formats that use namespaces. XML Schema is designed to mirror this support for modularity at the level of defining XML document structures, by making it easy to combine two schemas to produce a third which covers a merged document structure.

9. XML is the basis for RDF and the Semantic Web
W3C's Resource Description Framework (RDF) is an XML text format that supports resource description and metadata applications, such as music playlists, photo collections, and bibliographies. For example, RDF might let you identify people in a Web photo album using information from a personal contact list; then your mail client could automatically start a message to those people stating that their photos are on the Web. Just as HTML integrated documents, images, menu systems, and forms applications to launch the original Web, RDF provides tools to integrate even more, to make the Web a little bit more into a Semantic Web. Just like people need to have agreement on the meanings of the words they employ in their communication, computers need mechanisms for agreeing on the meanings of terms in order to communicate effectively. Formal descriptions of terms in a certain area (shopping or manufacturing, for example) are called ontologies and are a necessary part of the Semantic Web. RDF, ontologies, and the representation of meaning so that computers can help people do work are all topics of the Semantic Web Activity.

10. XML is license-free, platform-independent and well-supported
By choosing XML as the basis for a project, you gain access to a large and growing community of tools (one of which may already do what you need!) and engineers experienced in the technology. Opting for XML is a bit like choosing SQL for databases: you still have to build your own database and your own programs and procedures that manipulate it, but there are many tools available and many people who can help you. And since XML is license-free, you can build your own software around it without paying anybody anything. The large and growing support means that you are also not tied to a single vendor. XML isn't always the best solution, but it is always worth considering.

Wednesday, June 13, 2007

C# vs VB.NET

Advantages of C# over VB.NET and vice versa
The choice between C# and VB.NET is largely one of subjective preference. Some people like C#'s terse syntax, others like VB.NET's natural language, case-insensitive approach.

Both have access to the same framework libraries. Both will perform largely equivalently (with a few small differences which are unlikely to affect most people, assuming VB.NET is used with Option Strict on).

Learning the .NET framework itself is a much bigger issue than learning either of the languages, and it's perfectly possible to become fluent in both. There are, however, a few actual differences which may affect your decision:


VB.NET Advantages

  • Support for optional parameters - very handy for some COM interoperability
  • Support for late binding with Option Strict off - type safety at compile time goes out of the window, but legacy libraries which don't have strongly typed interfaces become easier to use.
  • Support for named indexers (aka properties with parameters).
  • Various legacy VB functions (provided in the Microsoft.VisualBasic namespace, and can be used by other languages with a reference to the Microsoft.VisualBasic.dll). Many of these can be harmful to performance if used unwisely, however, and many people believe they should be avoided for the most part.
  • The With construct: it's a matter of debate as to whether this is an advantage or not, but it's certainly a difference.
  • Simpler (in expression - perhaps more complicated in understanding) event handling, where a method can declare that it handles an event, rather than the handler having to be set up in code.
  • The ability to implement interfaces with methods of different names. (Arguably this makes it harder to find the implementation of an interface, however.)
  • Catch ... When ... clauses, which allow exceptions to be filtered based on runtime expressions rather than just by type.
  • The VB.NET part of Visual Studio .NET compiles your code in the background. While this is considered an advantage for small projects, people creating very large projects have found that the IDE slows down considerably as the project gets larger.

C# Advantages

  • XML documentation generated from source code comments. (This is coming in VB.NET with Whidbey (the code name for the next version of Visual Studio and .NET), and there are tools which will do it with existing VB.NET code already.)
  • Operator overloading - again, coming to VB.NET in Whidbey.
  • Language support for unsigned types (you can use them from VB.NET, but they aren't in the language itself). Again, support for these is coming to VB.NET in Whidbey.
  • The using statement, which makes unmanaged resource disposal simple.
  • Explicit interface implementation, where an interface which is already implemented in a base class can be reimplemented separately in a derived class. Arguably this makes the class harder to understand, in the same way that member hiding normally does.
  • Unsafe code. This allows pointer arithmetic etc, and can improve performance in some situations. However, it is not to be used lightly, as a lot of the normal safety of C# is lost (as the name implies). Note that unsafe code is still managed code, i.e. it is compiled to IL, JITted, and run within the CLR.

Monday, May 28, 2007

OST [Open Source Technology]

Open Source advantages
What does Open Source mean, and why is it so important to so many? We discuss this matter here, especially the advantages for businesses like yours.

"Open Source promotes software reliability and quality by supporting independent peer review and rapid evolution of source code." - opensource.org

Openness
All advantages of Open Source are a result of (ta-da) its openness. Having the code makes it easy to resolve problems (by yourself or the next guy), which means that you don't have to rely on a single vendor for fixing potential problems. This is important for understanding everything that follows.

Stability
Since you can rely on anyone, and since the license states that any modification shipped elsewhere must be equally open, after a period of time Open Source software becomes more stable than most commercially distributed software. (Beware: Open Source doesn't necessarily mean you don't have to pay for it, though zero cost is usually a result of its freedom.)

Adaptability
Open Source software means Open Standards, so it is easy to adapt software to work closely with other Open Source software, and even with closed protocols and proprietary applications. This avoids vendor lock-in situations, which tie your hands and knees to one and only one vendor once you choose that vendor's products.

Quality
Not only does the software evolve into a stable product, a large userbase also supplies new possibilities, making it a feature-rich solution. More new features, fewer bugs, and a broader (testing) audience (peer review) are significant to the quality of a product.

Innovation
Competition is what drives innovation, and Open Source keeps competition alive. As no one has any unfair advantages, everybody has the possibility to add value and provide services. Information wants to be free.

Security
It is widely known that security by obscurity is not a secure practice in the long run. By opening the code and by wide adoption of Open Source software, it grows more secure. Generally, new Open Source projects tend to be insecure, but once a project matures and becomes production-ready, it is more reliable and more secure than most available commercial software.

Zero price tag?
Open Source doesn't necessarily mean that it doesn't cost a dime, but most Open Source software is freely available and doesn't incur additional licenses per user per year. This allows us to cut down on price and spend more time creating secure, well-adapted solutions than commercial consultancy firms can.

Thursday, May 10, 2007

Linux

What is Linux ?
Linux is an open operating system available under the GPL. This means the source code is freely available.

Anyone distributing machine-executable versions of this code should also be able to provide the source code, and any changes to the source code should be made available under the same licensing conditions.

Linux is mainly developed by volunteers all over the world, although the IT industry has started contributing as well.


Linux runs on widely differing hardware platforms, ranging from small embedded systems through commodity personal computers to huge clusters for processor-intensive jobs like scientific calculations or 3D rendering.

CPU architectures supported include IA32 (Intel, AMD, Cyrix,...), IA64 (Intel), m68k (Motorola), PowerPC(IBM/Motorola), Sparc (Sun), Sparc64 (Sun), MIPS, ARM, Alpha (Compaq/Digital).


Technically, the term 'Linux' denotes only the kernel of the operating system. Various companies and groups of volunteers have built Linux distributions around this kernel.

A Linux distribution contains all necessary tools and programs to install and maintain the system, perform basic operations and develop software. In addition to this a number of applications are also included such as a web browser, MUA, news reader, bitmap editor, audio manipulation tools,... Almost all of these application programs carry a similar open license as the Linux kernel.


Key advantages of Linux
Linux source code is freely distributed- Tens of thousands of programmers have reviewed the source code to improve performance, eliminate bugs, and strengthen security. No other operating system has ever undergone this level of review. This Open Source design has created most of the advantages listed below.


Linux has the best technical support available- Linux is supported by commercial distributors, consultants, and by a very active community of users and developers. In 1997, the Linux community was awarded InfoWorld's Product of the Year Award for Best Technical Support over all commercial software vendors.


Linux has no vendor lock-in- The availability of source code means that every user and support provider is empowered to get to the root of technical problems quickly and effectively. This contrasts sharply with proprietary operating systems, where even top-tier support providers must rely on the OS vendor for technical information and bug fixes.


Linux runs on a wide range of hardware- Most Linux systems are based on standard PC hardware, and Linux supports a very wide range of PC devices. However, it also supports a wide range of other computer types, including Alpha, PowerPC, 680x0, SPARC, and StrongARM processors, and system sizes ranging from PDAs (such as the PalmPilot) to supercomputers constructed from clusters of systems (Beowulf clusters).


Linux is exceptionally stable- Properly configured, Linux systems will generally run until the hardware fails or the system is shut down. Continuous up-times of hundreds of days (up to a year or more) are not uncommon.


Linux has the tools and applications you need- Programs ranging from the market-dominating Apache web server to the powerful GIMP graphics editor are included in most Linux distributions. Free and commercial applications are available to meet most application needs.


Linux interoperates with many other types of computer systems- Linux communicates using the native networking protocols of Unix, Microsoft Windows 95/NT, IBM OS/2, Netware, and Macintosh systems and can also read and write disks and partitions from these and other operating systems.


Linux has a low total cost of ownership-Although the Linux learning curve is significant, the stability, design, and breadth of tools available for Linux result in very low ongoing operating costs.

Linux: "all for one and one for all"? All changes one makes in Open Source software will benefit each and everyone, all over the world, without exceptions or constraints.

Linux is fun!

Monday, April 30, 2007

Servlets and JSP: An Overview

1. What are Java Servlets?
Servlets are Java technology's answer to CGI programming. They are programs that run on a Web server and build Web pages. Building Web pages on the fly is useful (and commonly done) for a number of reasons:

The Web page is based on data submitted by the user- For example, the results pages from search engines are generated this way, and programs that process orders for e-commerce sites do this as well.

The data changes frequently- For example, a weather-report or news headlines page might build the page dynamically, perhaps returning a previously built page if it is still up to date.

The Web page uses information from corporate databases or other such sources- For example, you would use this for making a Web page at an on-line store that lists current prices and number of items in stock.

2. What are the Advantages of Servlets Over "Traditional" CGI?
Java servlets are more efficient, easier to use, more powerful, more portable, and cheaper than traditional CGI and than many alternative CGI-like technologies. (More importantly, servlet developers get paid more than Perl programmers :-).

Efficient- With traditional CGI, a new process is started for each HTTP request. If the CGI program does a relatively fast operation, the overhead of starting the process can dominate the execution time. With servlets, the Java Virtual Machine stays up, and each request is handled by a lightweight Java thread, not a heavyweight operating system process.

Similarly, in traditional CGI, if there are N simultaneous requests to the same CGI program, then the code for the CGI program is loaded into memory N times.

With servlets, however, there are N threads but only a single copy of the servlet class. Servlets also have more alternatives than do regular CGI programs for optimizations such as caching previous computations, keeping database connections open, and the like.

Convenient- Hey, you already know Java. Why learn Perl too? Besides the convenience of being able to use a familiar language, servlets have an extensive infrastructure for automatically parsing and decoding HTML form data, reading and setting HTTP headers, handling cookies, tracking sessions, and many other such utilities.

Powerful-Java servlets let you easily do several things that are difficult or impossible with regular CGI. For one thing, servlets can talk directly to the Web server (regular CGI programs can't). This simplifies operations that need to look up images and other data stored in standard places. Servlets can also share data among each other, making useful things like database connection pools easy to implement. They can also maintain information from request to request, simplifying things like session tracking and caching of previous computations.

Portable- Servlets are written in Java and follow a well-standardized API. Consequently, servlets written for, say, the iPlanet Enterprise Server can run virtually unchanged on Apache, Microsoft IIS, or WebStar. Servlets are supported directly or via a plugin on almost every major Web server.

Inexpensive- There are a number of free or very inexpensive Web servers available that are good for "personal" use or low-volume Web sites. However, with the major exception of Apache, which is free, most commercial-quality Web servers are relatively expensive.

Nevertheless, once you have a Web server, no matter the cost of that server, adding servlet support to it (if it doesn't come preconfigured to support servlets) is generally free or cheap.

3. What is JSP?
JavaServer Pages (JSP) is a technology that lets you mix regular, static HTML with dynamically generated HTML. Many Web pages that are built by CGI programs are mostly static, with the dynamic part limited to a few small locations. But most CGI variations, including servlets, make you generate the entire page via your program, even though most of it is always the same. JSP lets you create the two parts separately. Here's an example:

<HTML>
<HEAD><TITLE>Welcome to Our Store</TITLE></HEAD>
<BODY>
<H1>Welcome to Our Store</H1>
<SMALL>Welcome, <%= Utils.getUserNameFromCookie(request) %>
To access your account settings, click
<A HREF="Account-Settings.html">here</A>.</SMALL>
<P>
Regular HTML for all the rest of the on-line store's Web page.
</BODY>
</HTML>

4. What are the Advantages of JSP?
vs. Active Server Pages (ASP)- ASP is a similar technology from Microsoft. The advantages of JSP are twofold. First, the dynamic part is written in Java, not Visual Basic or other MS-specific language, so it is more powerful and easier to use. Second, it is portable to other operating systems and non-Microsoft Web servers.

vs. Pure Servlets- JSP doesn't give you anything that you couldn't in principle do with a servlet. But it is more convenient to write (and to modify!) regular HTML than to have a zillion println statements that generate the HTML. Plus, by separating the look from the content you can put different people on different tasks: your Web page design experts can build the HTML, leaving places for your servlet programmers to insert the dynamic content.
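To see why the println approach gets tedious, here is a sketch of what a pure servlet effectively has to do: emit every line of the page from code, with the static markup and the dynamic part tangled together. (The page content and user name are illustrative; a real servlet would write to the HTTP response rather than return a String.)

```java
public class StorePageServletStyle {
    /* A pure servlet must build the whole page from code,
       even though most of the HTML never changes. */
    static String buildPage(String userName) {
        StringBuilder out = new StringBuilder();
        out.append("<HTML><BODY>\n");
        out.append("<H1>Welcome to Our Store</H1>\n");
        out.append("Welcome, ").append(userName).append("\n"); // the only dynamic part
        out.append("Regular HTML for all the rest of the page.\n");
        out.append("</BODY></HTML>");
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildPage("John Hacker"));
    }
}
```

With JSP, everything except the single dynamic expression stays as plain HTML that a page designer can edit directly.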

vs. Server-Side Includes (SSI)-SSI is a widely-supported technology for including externally-defined pieces into a static Web page. JSP is better because it lets you use servlets instead of a separate program to generate that dynamic part. Besides, SSI is really only intended for simple inclusions, not for "real" programs that use form data, make database connections, and the like.

vs. JavaScript- JavaScript can generate HTML dynamically on the client. This is a useful capability, but only handles situations where the dynamic information is based on the client's environment. With the exception of cookies, HTTP and form submission data is not available to JavaScript. And, since it runs on the client, JavaScript can't access server-side resources like databases, catalogs, pricing information, and the like.

vs. Static HTML-Regular HTML, of course, cannot contain dynamic information. JSP is so easy and convenient that it is quite feasible to augment HTML pages that only benefit marginally by the insertion of small amounts of dynamic data. Previously, the cost of using dynamic data would preclude its use in all but the most valuable instances.

Wednesday, April 18, 2007

Digital Signatures in Java

In public key cryptography, there are two keys: one private, one public. For digital signatures, the private key belongs to the sender and the public key is distributed to receivers.

The sender uses the private key to encode a message or data, and the receiver uses the matching public key to decode the message.

Digital signatures work just like public key cryptography. The signer encodes data with his own private key, and then anyone with his public key can decode it. This allows any receiver to verify the signer of the data and to confirm its integrity and authenticity.


To set up a digital signature in Java, you first need to set up a private key, usually by using keytool or the security API methods. Programmers often use the Java Certificate feature to securely verify public key authenticity.

After you have a key pair, you generate a digital signature using the jarsigner tool or the API methods. Use the Signature class to sign the data by creating a signature object, initializing it for signing, supplying the data, and then signing it. After it's signed, you export the objects into files for shipping to the receiver.

Once the data is signed, you send the receiver the data and signature. You must supply the receiver with the public key corresponding to the private key you used to generate the signature. The receiver imports the public key then uses the key to verify integrity. The receiver can verify by grabbing the object, initializing it for verification, processing the data, and then comparing the signature.

You need two applications to use Java's digital signature feature. One application generates the digital signature (the sender). The other application verifies authenticity (the receiver).


The Sender Code
The methods for the sending code are part of the java.security package and are usually placed between try and catch blocks. The first step is to produce the public and private keys.

In order to create a digital signature, you need a private key. The program needs to generate a key pair by using the KeyPairGenerator class. First, you need to create the key pair generator, by calling the getInstance method on the KeyPairGenerator class. You can use a number of different signature algorithms for the generator (Sun Microsystems actually provides a Digital Signature Algorithm, or DSA).

After creating the key pair generator you must initialize it. The KeyPairGenerator class has an initialize method that takes two arguments, one for the key size and one for a source of randomness. The key size is the key length in bits. The source of randomness must be an instance of the SecureRandom class. Finally, you generate the pair of keys and store them in PrivateKey and PublicKey objects.

/* create a key pair generator (here, Sun's DSA from the "SUN" provider) */

KeyPairGenerator keyGen = KeyPairGenerator.getInstance("DSA", "SUN");

/* initialize the key pair generator with a key size and a source of randomness */

SecureRandom random = SecureRandom.getInstance("SHA1PRNG", "SUN");
keyGen.initialize(1024, random);

/* generate and store the pair of keys */

KeyPair pair = keyGen.generateKeyPair();
PrivateKey privateKey = pair.getPrivate();
PublicKey publicKey = pair.getPublic();


Signing the data is the second step. A digital signature is created and verified using an instance of the Signature class. First you create a Signature object using the signature algorithm you chose (for example, Sun Microsystems' DSA). You must then initialize the signature object with the private key. You then supply the data to be signed to the Signature object by calling the update method.


Once all of the data has been given to the Signature object, you generate the signature of the data. Then you save the Signature bytes in one file and the public key bytes in another so you can send them. You will have three pieces to send; the data, the signature, and the public key. The signature is placed in a byte array. The public key is placed in a PublicKey object. You can get the encoded key bytes by calling the getEncoded method and store the bytes in a file.

/* create a Signature object using the same algorithm */

Signature dsa = Signature.getInstance("SHA1withDSA", "SUN");

/* initialize the Signature object for signing with the private key
   (the PrivateKey obtained from pair.getPrivate()) */

dsa.initSign(privateKey);

/* calls to the update method belong here, e.g. dsa.update(data); */

/* generate the signature */

byte[] realSignature = dsa.sign();

/* save the signature and public key in files */
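Putting the sender-side steps together, here is a minimal runnable sketch. The data string and output file names are illustrative; it uses the standard DSA algorithm discussed above (with a 2048-bit key and SHA256withDSA, the sizes modern JDKs support out of the box).

```java
import java.io.FileOutputStream;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.SecureRandom;
import java.security.Signature;

public class GenerateSignature {
    public static void main(String[] args) throws Exception {
        /* 1. Generate a DSA key pair. */
        KeyPairGenerator keyGen = KeyPairGenerator.getInstance("DSA");
        keyGen.initialize(2048, SecureRandom.getInstance("SHA1PRNG"));
        KeyPair pair = keyGen.generateKeyPair();

        /* 2. Sign the data with the private key. */
        byte[] data = "Order #42: 3 Pentium chips".getBytes(StandardCharsets.UTF_8);
        Signature dsa = Signature.getInstance("SHA256withDSA");
        dsa.initSign(pair.getPrivate());
        dsa.update(data);
        byte[] realSignature = dsa.sign();

        /* 3. Save the signature bytes and the encoded public key bytes
              in files, ready to ship to the receiver. */
        try (FileOutputStream sigOut = new FileOutputStream("signature.bin")) {
            sigOut.write(realSignature);
        }
        try (FileOutputStream keyOut = new FileOutputStream("publickey.bin")) {
            keyOut.write(pair.getPublic().getEncoded());
        }
        System.out.println("signature and public key saved");
    }
}
```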


The Receiver
To verify the signature and file, a receiver needs the data, the signature, and the public key. The methods for verifying data are part of the java.security package, and are usually placed between try and catch blocks.


The code needs to import the encoded public key bytes and convert them to a PublicKey. PublicKey is necessary because that is what the Signature initVerify method requires to initialize the Signature object for verification. Once you hold the encoded public key bytes, you can use the KeyFactory class to instantiate a public key from its encoding. You need a key specification and a KeyFactory object to do the conversion, and then you use the KeyFactory object to generate a PublicKey from the key specification.


The signature is verified using an instance of the Signature class. You need to create a Signature object that uses the same algorithm that was used to generate the signature. Then you need to initialize the signature object and give the Signature object the data that needs to be verified by again calling the update method.

Once the Signature object has all of the data, you can verify the signature. The signature is read into a byte array, and the verify method returns true if the alleged signature is the actual signature of the specified data, generated by the private key corresponding to the public key.
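The receiver-side steps, rebuilding the PublicKey with a key specification and a KeyFactory and then verifying, can be sketched end to end as follows. Key pair generation and signing are included so the example is self-contained; in practice the receiver would read the data, signature, and encoded key bytes from the files the sender shipped. The data string is illustrative.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyFactory;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PublicKey;
import java.security.SecureRandom;
import java.security.Signature;
import java.security.spec.X509EncodedKeySpec;

public class VerifySignature {
    public static void main(String[] args) throws Exception {
        /* Sender side: generate a key pair and sign some data. */
        KeyPairGenerator keyGen = KeyPairGenerator.getInstance("DSA");
        keyGen.initialize(2048, SecureRandom.getInstance("SHA1PRNG"));
        KeyPair pair = keyGen.generateKeyPair();

        byte[] data = "Order #42: 3 Pentium chips".getBytes(StandardCharsets.UTF_8);
        Signature signer = Signature.getInstance("SHA256withDSA");
        signer.initSign(pair.getPrivate());
        signer.update(data);
        byte[] signature = signer.sign();

        /* Receiver side: rebuild the PublicKey from its encoded bytes
           using a key specification and a KeyFactory... */
        byte[] encodedKey = pair.getPublic().getEncoded();
        X509EncodedKeySpec keySpec = new X509EncodedKeySpec(encodedKey);
        PublicKey publicKey = KeyFactory.getInstance("DSA").generatePublic(keySpec);

        /* ...then initialize a Signature object for verification, feed it the
           data, and compare it against the received signature bytes. */
        Signature verifier = Signature.getInstance("SHA256withDSA");
        verifier.initVerify(publicKey);
        verifier.update(data);
        System.out.println("signature verifies: " + verifier.verify(signature));
    }
}
```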