Step one: ask clarifying questions: is the rotation clockwise or counterclockwise, by how many degrees, and is the matrix guaranteed to be square?
Step two: talk through the solution with the interviewer.
Step three: write the code. It's OK if there are bugs, but make sure to give test cases to find them.
Thought: rotate the matrix 90 degrees clockwise, layer by layer, in place. Within each layer, rotate one group of four elements at a time, starting with the four corners, then the next group, until the layer is done. Here first = layer and last = n - layer - 1, and the inner loop index runs from first up to last (excluded).
void matrixRotation(int[][] matrix, int n) {
    for (int layer = 0; layer < n / 2; layer++) {
        // if n is odd, the innermost element doesn't need to be rotated
        int first = layer;
        int last = n - layer - 1;
        for (int i = first; i < last; i++) {
            int offset = i - first;
            // first save the top value
            int top = matrix[first][i];
            // left --> top
            matrix[first][i] = matrix[last - offset][first];
            // bottom --> left
            matrix[last - offset][first] = matrix[last][last - offset];
            // right --> bottom
            matrix[last][last - offset] = matrix[i][last];
            // top --> right
            matrix[i][last] = top;
        }
    }
}
The fear of the LORD is the beginning of wisdom, and knowledge of the Holy One is understanding.
Monday, April 28, 2014
Complexity Analysis for Recursion
Big-Oh for Recursive Functions: Recurrence Relations
It's not easy to determine the asymptotic complexity (using big-Oh) of recursive functions without an easy-to-use but underutilized tool: recurrence relations. This web page gives an introduction to how recurrence relations can be used to help determine the big-Oh running time of recursive functions.
Motivation
Given a binary tree, is it a search tree?
In part A students are asked to write the function ValsLess:
In Part B, students are asked to write IsBST using ValsLess and assuming that a similar function ValsGreater exists. The solution is shown below:
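The original page's code isn't reproduced in this post; here is a minimal C++ sketch of what ValsLess, ValsGreater, and IsBST might look like (the Tree struct and its field names are assumptions, not the original code):

struct Tree {
    int info;
    Tree* left;
    Tree* right;
};

// Part A: true if every value in the tree rooted at t is less than val.
bool ValsLess(Tree* t, int val) {
    if (t == nullptr) return true;
    return t->info < val && ValsLess(t->left, val) && ValsLess(t->right, val);
}

// The analogous helper assumed to exist in Part B.
bool ValsGreater(Tree* t, int val) {
    if (t == nullptr) return true;
    return t->info > val && ValsGreater(t->left, val) && ValsGreater(t->right, val);
}

// Part B: true if t is a binary search tree.
bool IsBST(Tree* t) {
    if (t == nullptr) return true;
    return ValsLess(t->left, t->info)
        && ValsGreater(t->right, t->info)
        && IsBST(t->left)
        && IsBST(t->right);
}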
Before continuing you should try to determine/guess/reason about what the complexity of IsBST is for an n-node tree. Assume that ValsLess and ValsGreater both run in O(n) time for an n-node tree.
A function with similar characteristics
What is the asymptotic complexity of the function DoStuff shown below, and why? Assume that the function Combine runs in O(n) time when |left-right| = n, i.e., when Combine is used to combine n elements in the vector a.
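The code for DoStuff isn't reproduced in this post either; the sketch below is consistent with the description (the names DoStuff and Combine come from the text, while implementing Combine with std::inplace_merge is an assumption made just so the sketch compiles and runs):

#include <algorithm>
#include <vector>

// Combine merges the two sorted halves a[left..mid] and a[mid+1..right];
// assume it does O(n) work for a segment of n elements.
void Combine(std::vector<int>& a, int left, int mid, int right) {
    std::inplace_merge(a.begin() + left, a.begin() + mid + 1, a.begin() + right + 1);
}

void DoStuff(std::vector<int>& a, int left, int right) {
    if (left < right) {
        int mid = (left + right) / 2;
        DoStuff(a, left, mid);        // T(n/2)
        DoStuff(a, mid + 1, right);   // T(n/2)
        Combine(a, left, mid, right); // O(n)
    }
}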
You may recognize this function as an implementation of Mergesort. You may also remember that the complexity of Mergesort is O(n log n) for an n-element array/vector. How does this relate to the function IsBST?
We'll make a mathematical definition:
The Recurrence Relation
Let T(n) be the time for DoStuff to execute on an n-element vector, i.e., when |left-right| = n. Note that the time for DoStuff to execute on a one-element vector is O(1), constant time.
Then we have the following relationship:
T(n) = 2 T(n/2) + O(n)   [the O(n) is for Combine]
T(1) = O(1)
This relationship is called a recurrence relation because the function T(..) occurs on both sides of the = sign. This recurrence relation completely describes the function DoStuff, so if we could solve the recurrence relation we would know the complexity of DoStuff, since T(n) is the time for DoStuff to execute.
Base Case
When you write a recurrence relation you must write two equations: one for the general case and one for the base case. These correspond to the recursive case and the base case of the recursive function to which the recurrence applies. The base case is often an O(1) operation, though it can be otherwise. In some recurrence relations the base case involves input of size one, as here, so we wrote T(1) = O(1). However, there are cases where the base case has size zero. In such cases the base case would be T(0) = O(1).
How does this relate to the time for IsBST to execute? If you look carefully at the code for IsBST you'll see that it has the same form as the function DoStuff, so IsBST has the same recurrence relation as DoStuff. This means that if you accept that DoStuff is an O(n log n) function, then IsBST is also an O(n log n) function.
Solving Recurrence Relations
You can actually solve the recurrence relation given above. We'll sketch how to do that here. We'll write n instead of O(n) in the first line below because it makes the algebra much simpler.
T(n) = 2 T(n/2) + n
     = 2 [2 T(n/4) + n/2] + n
     = 4 T(n/4) + 2n
     = 4 [2 T(n/8) + n/4] + 2n
     = 8 T(n/8) + 3n
     = (ask your class to fill in this line, or you fill it in; you should have written 16 T(n/16) + 4n)
     ...
     = 2^k T(n/2^k) + k n   [this is the Eureka! line]
You could ask students to fill in parts of the last line. Note that the last line is derived by seeing a pattern --- this is the Eureka/leap of faith/practice with generalizing mathematical patterns part of the problem.
We know that T(1) = 1 and this is a way to end the derivation above. In particular we want T(1) to appear on the right hand side of the = sign. This means we want:
n/2^k = 1   OR   n = 2^k   OR   log₂ n = k
Continuing with the previous derivation we get the following, since k = log₂ n:
T(n) = 2^k T(n/2^k) + k n
     = 2^(log₂ n) T(1) + (log₂ n) n
     = n + n log₂ n   [remember that T(1) = 1]
     = O(n log n)
So we've solved the recurrence relation and its solution is what we "knew" it would be. To make this a formal proof you would need to use induction to show that O(n log n) is the solution to the given recurrence relation, but the "plug and chug" method shown above shows how to derive the solution --- the subsequent verification that this is the solution is something that can be left to a more advanced algorithms class.
Recurrence Relations to Remember
My suggestion is to do what we do in our classes: show students the "big five" recurrence relations below and ask them to remember what algorithms these are associated with. We ask our students to solve other recurrence relations, but we really want them to reason about recursive functions using the recurrence relations below more than knowing how to solve any given recurrence relation. We also want students to be able to derive a recurrence relation from a recursive function --- more on that later.
- T(n) = T(n/2) + O(1)
- T(n) = T(n-1) + O(1)
- T(n) = 2 T(n/2) + O(1)
- T(n) = T(n-1) + O(n)
- T(n) = 2 T(n/2) + O(n)
Before continuing, or with your class, try to fit each of the above recurrence relations to an algorithm and thus to its big-Oh solution. We'll show what these are below. Of course for practice you can ask your students to derive the solutions to the recurrence relations using the plug-and-chug method.
Recurrence | Algorithm | Big-Oh Solution |
---|---|---|
T(n) = T(n/2) + O(1) | Binary Search | O(log n) |
T(n) = T(n-1) + O(1) | Sequential Search | O(n) |
T(n) = 2 T(n/2) + O(1) | tree traversal | O(n) |
T(n) = T(n-1) + O(n) | Selection Sort (other n² sorts) | O(n²) |
T(n) = 2 T(n/2) + O(n) | Mergesort (average case Quicksort) | O(n log n) |
Practice Problem
Find the kth largest element in a vector, where the kth largest is greater than k elements, so that the 0th largest is the smallest element, the 3rd largest is greater than three elements (it will have index 3 if the vector is sorted), and so on. Assume that there are no duplicate elements in the vector, to make the solution easier.
The solution below correctly solves the problem. It makes a call to the partition function from Quicksort. Assume that the partition function runs in O(n) time for an n-element vector/vector-segment. For completeness we'll include a partition function at the end of this document.
For an n-element vector a the call FindKth(a,0,n-1,k) returns the kth element in a:
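The author's FindKth code isn't shown in this post; the following is a quickselect-style sketch matching the description. Partition is only declared here; a version is sketched under "A Partition Function" below.

#include <vector>

// Assumed to rearrange a[first..last] around a pivot and return the pivot's
// final index in O(n) time; see the sketch at the end of this document.
int Partition(std::vector<int>& a, int first, int last);

// Returns the kth element of a (k = 0 is the smallest), searching only the
// segment a[first..last].
int FindKth(std::vector<int>& a, int first, int last, int k) {
    int pivotIndex = Partition(a, first, last);
    if (k == pivotIndex) return a[pivotIndex];
    if (k < pivotIndex)  return FindKth(a, first, pivotIndex - 1, k);
    return FindKth(a, pivotIndex + 1, last, k);
}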
What is the big-Oh complexity of FindKth in the worst case and in the average case? Since it's difficult to reason precisely about the average case without more mathematical sophistication than we want to use, assume that things behave nicely in the average case. As it turns out, this gives the right answer for most definitions of average case. In later courses we can define more precisely what average case means.
Worst-case for FindKth
In the worst case the partition function is malicious and partitions the vector poorly every time it's called. A poor partition is one where the first element in the segment being partitioned is the pivot/partition element. This means that the segment of the vector in which the kth largest element appears decreases by only one in each recursive call, so that not much progress is made by each single invocation of the function FindKth.
If T(n) is the time for FindKth to execute for an n-element vector, the recurrence relation in the worst case is:
T(n) = T(n-1) + O(n), with T(1) = O(1)
Where the O(n) term comes from Partition. Note that there is only one recursive call made in FindKth.
This is one of the big-five recurrences; its solution is O(n²), so FindKth in the worst case is an O(n²) function.
Average-case for FindKth
In the average case we're assuming good things happen. This means that the partition function will divide the vector into two roughly equal pieces each time it's called. Although this seems like a lot to ask for, it approximates what happens well enough to yield the correct complexity for FindKth in the average case.
The recurrence relation for the average case is:
T(n) = T(n/2) + O(n)
This isn't one of the "big five", so you'll have to solve it yourself to determine the average-case complexity of FindKth. Hint: it's pretty good.
A Partition Function
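The partition function promised above didn't survive in this post; here is a Lomuto-style sketch that fits the worst-case discussion (it uses the first element of the segment as the pivot):

#include <utility>
#include <vector>

// Partitions a[first..last] around a[first] and returns the pivot's final
// index; everything before that index is smaller than the pivot.
// O(n) for an n-element segment.
int Partition(std::vector<int>& a, int first, int last) {
    int pivot = a[first];
    int boundary = first;                 // last index holding a value < pivot
    for (int i = first + 1; i <= last; i++) {
        if (a[i] < pivot) {
            ++boundary;
            std::swap(a[i], a[boundary]);
        }
    }
    std::swap(a[first], a[boundary]);     // move the pivot between the two groups
    return boundary;
}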
Combinations VS Permutation
Combination
Given two integers n and k, return all possible combinations of k numbers out of 1 ... n.
For example,
If n = 4 and k = 2, a solution is:
[
[1,2],
[1,3],
[1,4],
[2,3],
[2,4],
[3,4],
]
Solution
It's a way to simulate the process of picking 2 elements out of 4; a code sketch follows the steps below.
1. In round one, select 1 from 1...4 as the first number; you have [1].
2. In the next recursion, select one element from 2...4; you have [1,2], [1,3], [1,4].
3. Back in round one, select 2 from 2...4 as the first number; you have [2].
4. In the next recursion, select one element from 3...4; you have [2,3], [2,4].
......
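A minimal C++ backtracking sketch of this process (the function and variable names are my own, not from the original post):

#include <iostream>
#include <vector>

// Choose k numbers out of 1..n: pick a first number, then recurse on the
// numbers after it, exactly as in the steps above.
void combineHelper(int start, int n, int k,
                   std::vector<int>& current,
                   std::vector<std::vector<int>>& result) {
    if ((int)current.size() == k) {
        result.push_back(current);
        return;
    }
    for (int i = start; i <= n; i++) {
        current.push_back(i);                        // choose i
        combineHelper(i + 1, n, k, current, result); // pick the rest from i+1..n
        current.pop_back();                          // un-choose i, try the next
    }
}

std::vector<std::vector<int>> combine(int n, int k) {
    std::vector<std::vector<int>> result;
    std::vector<int> current;
    combineHelper(1, n, k, current, result);
    return result;
}

int main() {
    for (const auto& c : combine(4, 2)) {            // prints the six pairs above
        for (int x : c) std::cout << x << ' ';
        std::cout << '\n';
    }
}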
Permutation
Get all permutations of a string.
Solution
Swap the first element with each of the other elements (and with itself), then recursively permute the rest of the string; a sketch follows.
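A small C++ sketch of that swap-and-recurse idea (again, the names are my own):

#include <iostream>
#include <string>
#include <utility>
#include <vector>

// Fix one character in position `start` by swapping, permute the suffix,
// then restore and try the next choice.
void permuteHelper(std::string& s, int start, std::vector<std::string>& result) {
    if (start == (int)s.size()) {
        result.push_back(s);
        return;
    }
    for (int i = start; i < (int)s.size(); i++) {
        std::swap(s[start], s[i]);           // choose s[i] for position `start`
        permuteHelper(s, start + 1, result); // permute the remaining suffix
        std::swap(s[start], s[i]);           // restore before the next choice
    }
}

std::vector<std::string> permute(std::string s) {
    std::vector<std::string> result;
    permuteHelper(s, 0, result);
    return result;
}

int main() {
    for (const auto& p : permute("abc")) std::cout << p << '\n'; // 6 permutations
}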
Sunday, April 27, 2014
Dijkstra’s shortest path algorithm
Pseudocode (from an algorithms class):
for each vertex v
    if (v is the source) Distance(v) = 0, v.parent = null;
    else Distance(v) = INT_MAX;
Create an empty priority queue PQ;
for each vertex v
    PQ.insert(v, Distance(v));
while (PQ not empty) {
    V = PQ.removeMin();
    for each neighbor U of V
        if (Distance(V) + EdgeLength(U, V) < Distance(U)) {
            Distance(U) = Distance(V) + EdgeLength(U, V);
            U.parent = V;
            PQ.decreaseKey(U, Distance(U));
        }
}
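For reference, here is a runnable C++ sketch of the same idea. Since std::priority_queue has no decrease-key operation, this version pushes duplicate entries and skips stale ones when they are popped, a common workaround; the graph representation and names are my own choices, not from the class notes.

#include <climits>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// graph[v] is a list of (neighbor, edge length) pairs. dist[v] ends up as the
// shortest distance from source to v, parent[v] as v's predecessor.
void dijkstra(const std::vector<std::vector<std::pair<int, int>>>& graph,
              int source, std::vector<int>& dist, std::vector<int>& parent) {
    int n = graph.size();
    dist.assign(n, INT_MAX);
    parent.assign(n, -1);
    dist[source] = 0;

    using Entry = std::pair<int, int>;                  // (distance, vertex)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> pq;
    pq.push({0, source});

    while (!pq.empty()) {
        auto [d, v] = pq.top();
        pq.pop();
        if (d > dist[v]) continue;                      // stale entry, already improved
        for (auto [u, len] : graph[v]) {
            if (dist[v] + len < dist[u]) {              // relax edge (v, u)
                dist[u] = dist[v] + len;
                parent[u] = v;
                pq.push({dist[u], u});
            }
        }
    }
}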
Reference
http://www.geeksforgeeks.org/greedy-algorithms-set-6-dijkstras-shortest-path-algorithm/
Friday, April 25, 2014
Tree Traversal Summary
Depth-first traversal is easily implemented via a stack, including recursively (via the call stack), while breadth-first traversal is easily implemented via a queue, including corecursively.
Depth First Search (DFS)
The recursive solutions are trivial, so this article mainly focuses on iterative solutions.
preorder traversal
inorder traversal
postorder traversal
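The iterative solutions referred to here were posted as separate snippets that aren't included in this summary; as a substitute, here is a stack-based sketch of preorder and inorder traversal (postorder can be handled similarly with two stacks or a visited flag). The TreeNode struct is the usual assumed node shape.

#include <stack>
#include <vector>

struct TreeNode { int val; TreeNode* left; TreeNode* right; };

// Iterative preorder: visit the node, then push right before left so the
// left subtree is processed first.
std::vector<int> preorder(TreeNode* root) {
    std::vector<int> out;
    std::stack<TreeNode*> st;
    if (root) st.push(root);
    while (!st.empty()) {
        TreeNode* node = st.top();
        st.pop();
        out.push_back(node->val);
        if (node->right) st.push(node->right);
        if (node->left) st.push(node->left);
    }
    return out;
}

// Iterative inorder: slide down the left spine pushing nodes, then visit and
// move to the right child.
std::vector<int> inorder(TreeNode* root) {
    std::vector<int> out;
    std::stack<TreeNode*> st;
    TreeNode* cur = root;
    while (cur != nullptr || !st.empty()) {
        while (cur != nullptr) { st.push(cur); cur = cur->left; }
        cur = st.top();
        st.pop();
        out.push_back(cur->val);
        cur = cur->right;
    }
    return out;
}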
Application
Pre-order traversal while duplicating nodes and values can make a complete duplicate of a binary tree. It can also be used to make a prefix expression (Polish notation) from expression trees: traverse the expression tree pre-orderly.
In-order traversal is very commonly used on binary search trees because it returns values from the underlying set in order, according to the comparator that set up the binary search tree (hence the name).
Post-order traversal while deleting or freeing nodes and values can delete or free an entire binary tree. It can also generate a postfix representation of a binary tree.
All the above implementations require call stack space proportional to the height of the tree. In a poorly balanced tree, this can be considerable. We can remove the stack requirement by maintaining parent pointers in each node, or by threading the tree.
Breadth First Search (BFS)
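No BFS code was included in the post either; here is a minimal queue-based level-order sketch, reusing the TreeNode struct from the DFS sketch above.

#include <queue>
#include <vector>

// Level-order traversal: pop a node, visit it, enqueue its children.
std::vector<int> levelOrder(TreeNode* root) {
    std::vector<int> out;
    std::queue<TreeNode*> q;
    if (root) q.push(root);
    while (!q.empty()) {
        TreeNode* node = q.front();
        q.pop();
        out.push_back(node->val);
        if (node->left) q.push(node->left);
        if (node->right) q.push(node->right);
    }
    return out;
}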
Tree Traversal Complexity
BFS:
Time complexity is O(|V|), where |V| is the number of nodes, since you need to traverse all nodes.
Space complexity is O(|V|) as well, since in the worst case you need to hold all vertices in the queue.
DFS:
Time complexity is again O(|V|), since you need to traverse all nodes.
Space complexity depends on the implementation; a recursive implementation can have O(h) space complexity in the worst case, where h is the maximal depth of the tree.
Using an iterative solution with an explicit stack is essentially the same as BFS, just with a stack instead of a queue, so you get O(|V|) time and O(|V|) space.
Note that the space and time complexity are a bit different for a tree than for a general graph, because you do not need to maintain a visited set for a tree, and |E| = O(|V|), so the |E| factor is actually redundant for a tree.
Graph Traversal Complexity
Space complexity
When the number of vertices in the graph is known ahead of time, and additional data structures are used to determine which vertices have already been added to the queue, the space complexity can be expressed as O(|V|), where |V| is the cardinality of the set of vertices. If the graph is represented by an adjacency list it occupies O(|V| + |E|) space in memory, while an adjacency matrix representation occupies O(|V|²).
Time complexity
The time complexity can be expressed as O(|V| + |E|), since every vertex and every edge will be explored in the worst case. Note: |E| may vary between O(1) and O(|V|²), depending on how sparse the input graph is (assuming that the graph is connected).
Thursday, April 24, 2014
LeetCode:Reverse Nodes in k-Group
Given a linked list, reverse the nodes of a linked list k at a time and return its modified list.
If the number of nodes is not a multiple of k then left-out nodes in the end should remain as it is.
You may not alter the values in the nodes; only the nodes themselves may be changed.
Only constant memory is allowed.
For example,
Given this linked list: 1->2->3->4->5
For k = 2, you should return: 2->1->4->3->5
For k = 3, you should return: 3->2->1->4->5
Solution:
Accumulate k nodes and reverse them. In the function "reverse", we pass curNode->next as the end condition.
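The original solution code isn't included in this post; the sketch below follows the same idea (reverse one group of k at a time, using the node just past the group as the stopping point). For clarity it recurses once per group, so strictly speaking it uses O(n/k) stack rather than constant memory.

struct ListNode { int val; ListNode* next; };

// Reverses the nodes in [head, end) and returns the new head of that group;
// `end` is the node just past the group, i.e. the end condition mentioned above.
ListNode* reverseGroup(ListNode* head, ListNode* end) {
    ListNode* prev = end;                 // the reversed group will point at `end`
    while (head != end) {
        ListNode* next = head->next;
        head->next = prev;
        prev = head;
        head = next;
    }
    return prev;
}

ListNode* reverseKGroup(ListNode* head, int k) {
    // Walk ahead k nodes; if fewer than k remain, leave this tail as it is.
    ListNode* groupEnd = head;
    for (int i = 0; i < k; i++) {
        if (groupEnd == nullptr) return head;
        groupEnd = groupEnd->next;
    }
    ListNode* newHead = reverseGroup(head, groupEnd);
    head->next = reverseKGroup(groupEnd, k);  // the old head is now the group's tail
    return newHead;
}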
LeetCode: Longest Consecutive Sequence
Given an unsorted array of integers, find the length of the longest consecutive elements sequence.
For example,
Given [100, 4, 200, 1, 3, 2],
The longest consecutive elements sequence is [1, 2, 3, 4]. Return its length: 4.
Your algorithm should run in O(n) complexity.
Analysis:
This is a typical problem where you sacrifice space to buy time. Since O(n) complexity is required, we cannot simply sort with a common sorting algorithm. The original post borrowed a neat solution built on two unordered_maps that represent each range by its first and last elements.
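That two-map code isn't reproduced here. As a stand-in, here is a different but standard O(n) average-time sketch that uses a single unordered_set and extends each run only from its smallest element:

#include <algorithm>
#include <unordered_set>
#include <vector>

int longestConsecutive(const std::vector<int>& nums) {
    std::unordered_set<int> seen(nums.begin(), nums.end());
    int best = 0;
    for (int x : seen) {
        if (seen.count(x - 1)) continue;    // x is not the start of a run
        int len = 1;
        while (seen.count(x + len)) ++len;  // extend the run upward
        best = std::max(best, len);
    }
    return best;
}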
LeetCode: Merge k Sorted Lists
Merge k sorted linked lists and return it as one sorted list. Analyze and describe its complexity.
Analysis:
The key to this problem is to analyze its complexity. Suppose the average number of nodes in each list is n, so there are kn nodes in total.
Solution 1: loop over the heads of all lists, find the smallest one, and append it to the final list. The worst case is O(k*kn) = O(k²n), because you have to compare k heads to get the smallest node and there are kn nodes in total.
Solution 2: merge the lists into the result one at a time. Each two-way merge needs only one comparison per node it appends, but nodes merged early are traversed again in every later merge, so merging sequentially is O(k²n) in the worst case; merging the lists pairwise, divide-and-conquer style, brings this down to O(kn log k).
Solution 3: use a priority queue. About the complexity of building a priority queue with an array, a linked list, a BST, or a heap, please see
http://pages.cs.wisc.edu/~vernon/cs367/notes/11.PRIORITY-Q.html. The worst case is O(kn log k). Notice in the code that after you pop a node from the priority queue, you push node->next into the PQ; therefore the PQ never holds more than k nodes, and insertion and deletion are both O(log k).
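A runnable C++ sketch of the priority-queue solution (the ListNode struct is the same assumed shape as in the Reverse Nodes in k-Group sketch above):

#include <queue>
#include <vector>

struct ListNode { int val; ListNode* next; };

// Min-priority queue keyed on node value; it never holds more than k nodes.
ListNode* mergeKLists(std::vector<ListNode*>& lists) {
    auto cmp = [](ListNode* a, ListNode* b) { return a->val > b->val; };
    std::priority_queue<ListNode*, std::vector<ListNode*>, decltype(cmp)> pq(cmp);
    for (ListNode* head : lists)
        if (head) pq.push(head);

    ListNode dummy{0, nullptr};
    ListNode* tail = &dummy;
    while (!pq.empty()) {
        ListNode* node = pq.top();
        pq.pop();
        tail->next = node;                   // append the smallest available node
        tail = node;
        if (node->next) pq.push(node->next); // push its successor from the same list
    }
    return dummy.next;
}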
Heap (minHeap):
For the insert operation, we start by adding a value to the end of the array (constant time, assuming the array doesn't have to be expanded); then we swap values up the tree until the order property has been restored. In the worst case, we follow a path all the way from a leaf to the root (i.e., the work we do is proportional to the height of the tree). Because a heap is a balanced binary tree, the height of the tree is O(log N), where N is the number of values stored in the tree. The removeMin operation is similar: in the worst case, we follow a path down the tree from the root to a leaf. Again, the worst-case time is O(log N).
Tuesday, April 22, 2014
LeetCode: Path Sum I && II
Given a binary tree and a sum, determine if the tree has a root-to-leaf path such that adding up all the values along the path equals the given sum.
For example:
Given the below binary tree and
sum = 22
        5
       / \
      4   8
     /   / \
    11  13  4
   /  \      \
  7    2      1
return true, as there exists a root-to-leaf path
5->4->11->2
whose sum is 22.
Solve the two problems bug-free.
1. For Path Sum II, use the C++ feature of passing arguments both by value and by reference.
2. vector::push_back() actually creates a copy of the argument and stores it in the vector.
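No code survived in this post; here is a short C++ sketch of both problems (TreeNode is the usual assumed node shape), which also illustrates the push_back-copies point above:

#include <vector>

struct TreeNode { int val; TreeNode* left; TreeNode* right; };

// Path Sum I: subtract the current value from the remaining sum and recurse;
// a match only counts at a leaf.
bool hasPathSum(TreeNode* root, int sum) {
    if (root == nullptr) return false;
    if (root->left == nullptr && root->right == nullptr) return sum == root->val;
    return hasPathSum(root->left, sum - root->val) ||
           hasPathSum(root->right, sum - root->val);
}

// Path Sum II: collect every root-to-leaf path adding up to sum. The current
// path is passed by reference and backtracked; push_back copies it into result.
void collectPaths(TreeNode* node, int sum, std::vector<int>& path,
                  std::vector<std::vector<int>>& result) {
    if (node == nullptr) return;
    path.push_back(node->val);
    if (node->left == nullptr && node->right == nullptr && sum == node->val)
        result.push_back(path);           // stores a copy of the current path
    collectPaths(node->left, sum - node->val, path, result);
    collectPaths(node->right, sum - node->val, path, result);
    path.pop_back();                      // backtrack
}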
Using The [] Operator Efficiently With C++ unordered_map
operator[] will insert an entry for you with a default-constructed value, if one isn't already there. It is equivalent to, but will probably be implemented more efficiently than:
iterator iter = map.find(key);
if (iter == map.end())
{
    iter = map.insert(value_type(key, int())).first;
}
return iter->second;
operator[] can be quicker than doing the work manually with find() and insert(), because it can save having to re-hash the key.
One way you can work around having multiple lookups in your code is to take a reference to the value:
int &stored_val = map[key];
// return the corresponding value if we find the key in the map - ie != 0
if (stored_val) return stored_val;
// if not in map
stored_val = value;
return value;
Note that if the value doesn't exist in the map, operator[] will default-construct and insert one. So while this will avoid multiple lookups, it might actually be slower if used with a type that is slower to default-construct + assign than to copy- or move-construct.
With int though, which cheaply default-constructs to 0, you might be able to treat 0 as a magic number meaning empty. This looks like it might be the case in your example.
If you have no such magic number, you've got two options. What you should use depends on how expensive it is for you to compute the value.
First, when hashing the key is cheap but computing the value is expensive, find() may be the best option. This will hash twice but only compute the value when needed:
iterator iter = map.find(key);
// return the corresponding value if we find the key in the map
if(iter != map.end()) return iter->second;
// if not in map
map.insert(value_type(key, value));
return value;
But if you've got the value already, you can do it very efficiently -- perhaps slightly more efficiently than using a reference + magic number as above:
pair<iterator, bool> iter = map.insert(value_type(key, value));
return iter.first->second;
If the bool in the pair returned by map.insert is true, the item was inserted. Otherwise, it already existed and no modifications were made. The iterator in the pair points to the inserted or existing value in the map. For your simple example, this may be the best option.
Saturday, April 19, 2014
LeetCode: Balanced Binary Tree
Given a binary tree, determine if it is height-balanced.
For this problem, a height-balanced binary tree is defined as a binary tree in which the depth of the two subtrees of every node never differ by more than 1.
Solution:
Recursion. Check if balanced for each node.
The following solution uses a C++ reference parameter (needs to be tested).
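The solution itself isn't in this post; here is a sketch of the same reference-based idea (the height comes back through a reference so each node is visited only once; TreeNode is the assumed node shape):

#include <algorithm>
#include <cstdlib>

struct TreeNode { int val; TreeNode* left; TreeNode* right; };

// Returns whether the subtree is balanced and reports its height via `height`.
bool isBalancedHelper(TreeNode* node, int& height) {
    if (node == nullptr) { height = 0; return true; }
    int lh = 0, rh = 0;
    bool ok = isBalancedHelper(node->left, lh) &&
              isBalancedHelper(node->right, rh) &&
              std::abs(lh - rh) <= 1;
    height = std::max(lh, rh) + 1;
    return ok;
}

bool isBalanced(TreeNode* root) {
    int height = 0;
    return isBalancedHelper(root, height);
}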
Interface VS abstract class
A class must be declared abstract when it has one or more abstract methods. A method is declared abstract when it has a method heading, but no body – which means that an abstract method has no implementation code inside curly braces like normal methods do.
When to use abstract methods in Java?
Why you would want to declare a method as abstract is best illustrated by an example. Take a look at the code below:
/* the Figure class must be declared as abstract because it contains an abstract method */
public abstract class Figure {
    /* because this is an abstract method the body will be blank */
    public abstract float getArea();
}

public class Circle extends Figure {
    private float radius;

    public float getArea() {
        return 3.14f * radius * radius;
    }
}

public class Rectangle extends Figure {
    private float length, width;

    public float getArea() {
        return length * width;
    }
}
In the Figure class above, we have an abstract method called getArea(), and because the Figure class contains an abstract method the entire Figure class itself must be declared abstract. The Figure base class has two classes which derive from it – called Circle and Rectangle. Both the Circle and Rectangle classes provide definitions for the getArea method, as you can see in the code above.
But the real question is why did we declare the getArea method to be abstract in the Figure class? Well, what does the getArea method do? It returns the area of a specific shape. But, because the Figure class isn't a specific shape (like a Circle or a Rectangle), there's really no definition we can give the getArea method inside the Figure class. That's why we declare both the method and the Figure class to be abstract. Any class that derives from the Figure class basically has 2 options: 1. the derived class must provide a definition for the getArea method, or 2. the derived class must be declared abstract itself.
A non-abstract class is called a concrete class
You should also know that any non-abstract class is called a concrete class. Knowing your terminology definitely pays off in an interview. Now that we've explored the abstract method/class concepts, let's get into the concept of interfaces and how they differ from abstract classes.
Java interface versus abstract class
An interface differs from an abstract class because an interface is not a class. An interface is essentially a type that can be satisfied by any class that implements the interface.
Any class that implements an interface must satisfy 2 conditions:
- It must have the phrase "implements Interface_Name" at the beginning of the class definition.
- It must implement all of the method headings listed in the interface definition.
This is what an interface called "Dog" would look like:
public interface Dog {
    public boolean Barks();
    public boolean isGoldenRetriever();
}
Now, if a class were to implement this interface, this is what it would look like:
public class SomeClass implements Dog {
    public boolean Barks() {
        // method definition here
    }

    public boolean isGoldenRetriever() {
        // method definition here
    }
}

Now that we know the basics of interfaces and abstract classes, let's get to the heart of the question and explore the differences between the two. Here are the three major differences:
Abstract classes and inheritance
1. Abstract classes are meant to be inherited from, and when one class inherits from another it means that there is a strong relationship between the 2 classes. For instance, if we have an abstract base class called "Canine", any deriving class should be an animal that belongs to the Canine family (like a Dog or a Wolf). The reason we use the word "should" is because it is up to the Java developer to ensure that relationship is maintained.
With an interface, on the other hand, the relationship between the interface itself and the class implementing the interface is not necessarily strong. For example, if we have a class called "House", that class could also implement an interface called "AirConditioning". Having air conditioning is not really an essential part of a House (although some may argue that point), and the relationship is not as strong as, say, the relationship between a "TownHouse" class and the "House" class, or the relationship between an "Apartment" class that derives from a "House" class.
Because a TownHouse is a type of House, that relationship is very strong, and would be more appropriately defined through inheritance instead of interfaces.
So, we can summarize this first point by saying that an abstract class would be more appropriate when there is a strong relationship between the abstract class and the classes that will derive from it. Again, this is because an abstract class is very closely linked to inheritance, which implies a strong relationship. But, with interfaces there need not be a strong relationship between the interface and the classes that implement the interface.
Interfaces are a good substitute for multiple inheritance
2. Java does not allow multiple inheritance – see the discussion on Java Multiple Inheritance if you need a refresher on this. In Java, a class can only derive from one class, whether it's abstract or not. However, a class can implement multiple interfaces, which can be considered an alternative to multiple inheritance. So, one major difference is that a Java class can inherit from only one abstract class, but can implement multiple interfaces.
Abstract classes can have some implementation code
3. An abstract class may provide some methods with definitions – so an abstract class can have non-abstract methods with actual implementation details. An abstract class can also have constructors and instance variables as well. An interface, however, can not provide any method definitions – it can only provide method headings. Any class that implements the interface is responsible for providing the method definition/implementation.
When to use abstract class and interface in Java
Here are some guidelines on when to use an abstract class and when to use interfaces in Java:
- An abstract class is good if you think you will plan on using inheritance since it provides a common base class implementation to derived classes.
- An abstract class is also good if you want to be able to declare non-public members. In an interface, all methods must be public.
- If you think you will need to add methods in the future, then an abstract class is a better choice. Because if you add new method headings to an interface, then all of the classes that already implement that interface will have to be changed to implement the new methods. That can be quite a hassle.
- Interfaces are a good choice when you think that the API will not change for a while.
- Interfaces are also good when you want to have something similar to multiple inheritance, since you can implement multiple interfaces.
Friday, April 18, 2014
Sort in Java
While analyzing source code of a large number of open source Java projects, I found Java developers frequently sort in two ways. One is using the sort() method of Collections or Arrays, and the other is using sorted data structures, such as TreeMap and TreeSet.
Using sort() Method
If it is a collection, use Collections.sort() method.
// Collections.sort
List<ObjectName> list = new ArrayList<ObjectName>();
Collections.sort(list, new Comparator<ObjectName>() {
    public int compare(ObjectName o1, ObjectName o2) {
        return o1.toString().compareTo(o2.toString());
    }
});
If it is an array, use Arrays.sort() method.
// Arrays.sort
ObjectName[] arr = new ObjectName[10];
Arrays.sort(arr, new Comparator<ObjectName>() {
    public int compare(ObjectName o1, ObjectName o2) {
        return o1.toString().compareTo(o2.toString());
    }
});
This is very convenient if a collection or an array is already set up.
Using Sorted Data Structures
If it is a list or set, use TreeSet to sort.
// TreeSet
Set<ObjectName> sortedSet = new TreeSet<ObjectName>(new Comparator<ObjectName>() {
    public int compare(ObjectName o1, ObjectName o2) {
        return o1.toString().compareTo(o2.toString());
    }
});
sortedSet.addAll(unsortedSet);
If it is a map, use TreeMap to sort. TreeMap is sorted by key.
// TreeMap - using String.CASE_INSENSITIVE_ORDER, a Comparator that orders Strings by compareToIgnoreCase
Map<String, Integer> sortedMap = new TreeMap<String, Integer>(String.CASE_INSENSITIVE_ORDER);
sortedMap.putAll(unsortedMap);
// TreeMap - in general, a user-defined comparator
Map<ObjectName, String> sortedMap = new TreeMap<ObjectName, String>(new Comparator<ObjectName>() {
    public int compare(ObjectName o1, ObjectName o2) {
        return o1.toString().compareTo(o2.toString());
    }
});
sortedMap.putAll(unsortedMap);
This approach is very useful if you will do a lot of search operations on the collection. The sorted data structure gives a lookup time complexity of O(log n), which is lower than O(n).
Bad Practices
There are still bad practices, such as hand-rolling your own sorting algorithm. Take the code below for example: not only is the algorithm inefficient, it is also unreadable. This happens a lot, in different variations.
double t;
for (int i = 0; i < 2; i++)
    for (int j = i + 1; j < 3; j++)
        if (r[j] < r[i]) {
            t = r[i];
            r[i] = r[j];
            r[j] = t;
        }
LeetCode: Recover Binary Search Tree
Two elements of a binary search tree (BST) are swapped by mistake.
Recover the tree without changing its structure.
Note:
A solution using O(n) space is pretty straight forward. Could you devise a constant space solution?
Solution One
Key points: 1. With in-order traversal, the sequence of values should be in ascending order.
2. The previously visited node needs to be held.
3. It's not really O(1) space, since the recursion stack needs up to O(n) extra space.
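The code for Solution One isn't in this post; here is a C++ sketch of the idea (in-order traversal with a held prev pointer, then swap the two offending values; TreeNode is the assumed node shape):

#include <utility>

struct TreeNode { int val; TreeNode* left; TreeNode* right; };

// In-order traversal should produce ascending values; record where the order
// breaks. `prev` is the previously visited node.
void inorderFind(TreeNode* node, TreeNode*& prev, TreeNode*& first, TreeNode*& second) {
    if (node == nullptr) return;
    inorderFind(node->left, prev, first, second);
    if (prev != nullptr && prev->val > node->val) {  // an order violation
        if (first == nullptr) first = prev;          // first violation: remember prev
        second = node;                               // last violation: remember node
    }
    prev = node;
    inorderFind(node->right, prev, first, second);
}

void recoverTree(TreeNode* root) {
    TreeNode* prev = nullptr;
    TreeNode* first = nullptr;
    TreeNode* second = nullptr;
    inorderFind(root, prev, first, second);
    if (first != nullptr && second != nullptr)
        std::swap(first->val, second->val);          // undo the mistaken swap
}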
Solution Two
I implemented it in Java with O(n) space, using two ArrayLists to hold the sequence of nodes and values.
Trie
In computer science, a trie, also called digital tree and sometimes radix tree or prefix tree (as they can be searched by prefixes), is an ordered tree data structure that is used to store a dynamic set or associative array where the keys are usually strings. Unlike a binary search tree, no node in the tree stores the key associated with that node; instead, its position in the tree defines the key with which it is associated. All the descendants of a node have a common prefix of the string associated with that node, and the root is associated with the empty string. Values are normally not associated with every node, only with leaves and some inner nodes that correspond to keys of interest. For the space-optimized presentation of prefix tree, see compact prefix tree.
As discussed below, a trie has a number of advantages over binary search trees.[4] A trie can also be used to replace a hash table, over which it has the following advantages:
- Looking up data in a trie is faster in the worst case, O(m) time (where m is the length of a search string), compared to an imperfect hash table. An imperfect hash table can have key collisions. A key collision is the hash function mapping of different keys to the same position in a hash table. The worst-case lookup speed in an imperfect hash table is O(N) time, but far more typically is O(1), with O(m) time spent evaluating the hash.
- There are no collisions of different keys in a trie.
- Buckets in a trie, which are analogous to hash table buckets that store key collisions, are necessary only if a single key is associated with more than one value.
- There is no need to provide a hash function or to change hash functions as more keys are added to a trie.
- A trie can provide an alphabetical ordering of the entries by key.
A trie also has some drawbacks compared to a hash table:
- Tries can be slower in some cases than hash tables for looking up data, especially if the data is directly accessed on a hard disk drive or some other secondary storage device where the random-access time is high compared to main memory.[5]
- Some keys, such as floating point numbers, can lead to long chains and prefixes that are not particularly meaningful. Nevertheless, a bitwise trie can handle standard IEEE single and double format floating point numbers.
- Some tries can require more space than a hash table, as memory may be allocated for each character in the search string, rather than a single chunk of memory for the whole entry, as in most hash tables.
An example:
Given a list of phone numbers, determine if it is consistent in the sense that no number is the prefix of another. Let's say the phone catalog listed these numbers:
Emergency 911
Alice 97 625 999
Bob 91 12 54 26
In this case, it's not possible to call Bob, because the central would direct your call to the emergency line as soon as you had dialled the first three digits of Bob's phone number. So this list would not be consistent.
Input
The first line of input gives a single integer, 1 ≤ t ≤ 10, the number of test cases. Each test case starts with n, the number of phone numbers, on a separate line, 1 ≤ n ≤ 100000. Then follows n lines with one unique phone number on each line. A phone number is a sequence of at most ten digits.
Output
For each test case, output "YES" if the list is consistent, or "NO" otherwise.
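A full solution is linked below; here is an independent, minimal C++ sketch of the trie-based consistency check (all names are my own, and the spaces in the sample numbers are assumed to be stripped before insertion):

#include <iostream>
#include <string>
#include <vector>

// One node per digit; isEnd marks the end of a stored phone number.
struct TrieNode {
    TrieNode* child[10] = {nullptr};
    bool isEnd = false;
};

// Returns false if `number` is a prefix of an existing number or vice versa.
bool insertNumber(TrieNode* root, const std::string& number) {
    TrieNode* node = root;
    for (char c : number) {
        int d = c - '0';
        if (node->child[d] == nullptr) node->child[d] = new TrieNode();
        node = node->child[d];
        if (node->isEnd) return false;   // an existing number is a prefix of this one
    }
    node->isEnd = true;
    for (TrieNode* c : node->child)      // this number is a prefix of an existing one
        if (c != nullptr) return false;
    return true;
}

bool consistent(const std::vector<std::string>& numbers) {
    TrieNode root;                        // child nodes are leaked here for brevity
    for (const std::string& num : numbers)
        if (!insertNumber(&root, num)) return false;
    return true;
}

int main() {
    // Emergency 911, Alice 97625999, Bob 91125426 -- prints NO, as explained above.
    std::cout << (consistent({"911", "97625999", "91125426"}) ? "YES" : "NO") << '\n';
}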
Solution at http://blog.csdn.net/waljl/article/details/9115449
Wednesday, April 16, 2014
Word Break II
Given a string s and a dictionary of words dict, add spaces in s to construct a sentence where each word is a valid dictionary word.
Return all such possible sentences.
For example, given
s = "catsanddog",
dict = ["cat", "cats", "and", "sand", "dog"].
A solution is ["cats and dog", "cat sand dog"].
Solution 1
A straightforward way is to get all solutions by backtracking the DP table. As follows:
However, it cannot pass the large test cases.
Got another way to solve this by mingling DP and backtracking; slight changes to the above code will pass all test cases.
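Neither version of the author's code survived in this post; the following sketch combines the two ideas described (a DP table of which suffixes can be segmented, plus backtracking that only follows positions the table marks as viable). Names are my own.

#include <string>
#include <unordered_set>
#include <vector>

// canBreak[i] records whether s[i..n) can be fully segmented using dict.
void build(const std::string& s, int start,
           const std::unordered_set<std::string>& dict,
           const std::vector<bool>& canBreak,
           std::string current, std::vector<std::string>& result) {
    if (start == (int)s.size()) {
        if (!current.empty()) current.pop_back();   // drop the trailing space
        result.push_back(current);
        return;
    }
    for (int end = start + 1; end <= (int)s.size(); end++) {
        std::string word = s.substr(start, end - start);
        if (dict.count(word) && canBreak[end])      // only recurse where a full sentence is still possible
            build(s, end, dict, canBreak, current + word + " ", result);
    }
}

std::vector<std::string> wordBreak(const std::string& s,
                                   const std::unordered_set<std::string>& dict) {
    int n = s.size();
    std::vector<bool> canBreak(n + 1, false);
    canBreak[n] = true;
    for (int i = n - 1; i >= 0; i--)
        for (int j = i + 1; j <= n; j++)
            if (canBreak[j] && dict.count(s.substr(i, j - i))) { canBreak[i] = true; break; }

    std::vector<std::string> result;
    if (canBreak[0]) build(s, 0, dict, canBreak, "", result);
    return result;
}

For the example above, wordBreak("catsanddog", {"cat", "cats", "and", "sand", "dog"}) returns the two expected sentences.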
Tuesday, April 15, 2014
After PostBack - back to same place on page - ASP.NET
MaintainScrollPositionOnPostback ="true"
Page.MaintainScrollPositionOnPostback = true; should take you back to the same position on the screen, but you could use AJAX, or you could use SetFocus() to focus on a specific control after the postback:
Monday, April 14, 2014
LeetCode: Spiral Matrix I && II
Spiral Matrix I
Given a matrix of m x n elements (m rows, n columns), return all elements of the matrix in spiral order.
For example,
Given the following matrix:
[
[ 1, 2, 3 ],
[ 4, 5, 6 ],
[ 7, 8, 9 ]
]
You should return [1,2,3,6,9,8,7,4,5].
The tricky part of this problem is the termination condition of the outer for loop. In the author's original code (not shown here), the checks at lines 27 and 31 break out of the outer loop so as to handle matrices like [[1,11],[2, 12],[3, 13],[4, 14],[5, 15],[6, 16],[7, 17]]. In addition, commenting out lines 10 to 14 doesn't affect the result, but lines 16 to 21 are necessary.
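Since that code isn't included, here is an independent sketch of spiral order that peels one ring at a time; the guard before the bottom and left passes plays the role of those extra break checks for skinny matrices:

#include <vector>

std::vector<int> spiralOrder(const std::vector<std::vector<int>>& matrix) {
    std::vector<int> out;
    if (matrix.empty() || matrix[0].empty()) return out;
    int top = 0, bottom = (int)matrix.size() - 1;
    int left = 0, right = (int)matrix[0].size() - 1;
    while (top <= bottom && left <= right) {
        for (int j = left; j <= right; j++) out.push_back(matrix[top][j]);        // top row
        for (int i = top + 1; i <= bottom; i++) out.push_back(matrix[i][right]);  // right column
        if (top < bottom && left < right) {                                       // skip for a single row/column
            for (int j = right - 1; j >= left; j--) out.push_back(matrix[bottom][j]); // bottom row
            for (int i = bottom - 1; i > top; i--) out.push_back(matrix[i][left]);    // left column
        }
        ++top; --bottom; ++left; --right;
    }
    return out;
}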
Spiral Matrix II
Given an integer n, generate a square matrix filled with elements from 1 to n² in spiral order.
For example,
Given n = 3,
You should return the following matrix:
[
[ 1, 2, 3 ],
[ 8, 9, 4 ],
[ 7, 6, 5 ]
]
The algorithm is not difficult, but you need to be careful with the indices.
Had a little problem with the two-dimensional vector: in order to access it with [][], the matrix has to be initialized first. vector<int> v(n) sets the length of v to n.
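A sketch of Spiral Matrix II using the same boundary bookkeeping; note how the n-by-n vector is allocated up front so it can be indexed with [][]:

#include <vector>

std::vector<std::vector<int>> generateMatrix(int n) {
    std::vector<std::vector<int>> matrix(n, std::vector<int>(n, 0));  // n rows of n zeros
    int top = 0, bottom = n - 1, left = 0, right = n - 1;
    int val = 1;
    while (top <= bottom && left <= right) {
        for (int j = left; j <= right; j++) matrix[top][j] = val++;        // top row
        for (int i = top + 1; i <= bottom; i++) matrix[i][right] = val++;  // right column
        if (top < bottom && left < right) {
            for (int j = right - 1; j >= left; j--) matrix[bottom][j] = val++; // bottom row
            for (int i = bottom - 1; i > top; i--) matrix[i][left] = val++;    // left column
        }
        ++top; --bottom; ++left; --right;
    }
    return matrix;
}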
Vector
// inserting into a vector
#include <iostream>
#include <vector>
int main ()
{
std::vector<int> myvector (3,100);
std::vector<int>::iterator it;
it = myvector.begin();
it = myvector.insert ( it , 200 );
myvector.insert (it,2,300);
// "it" no longer valid, get a new one:
it = myvector.begin();
std::vector<int> anothervector (2,400);
myvector.insert (it+2,anothervector.begin(),anothervector.end());
int myarray [] = { 501,502,503 };
myvector.insert (myvector.begin(), myarray, myarray+3);
std::cout << "myvector contains:";
for (it=myvector.begin(); it<myvector.end(); it++)
std::cout << ' ' << *it;
std::cout << '\n';
return 0;
}
http://www.cplusplus.com/reference/vector/vector/insert/
Two Dimensional Vector
This tutorial will show you how to create a vector matrix, much like an array matrix, for storing data in a two dimensional format. A knowledge of arrays, vectors, loops, and basic C++ programming is necessary to fully grasp the material covered in this tutorial.

If you have dealt with arrays for a while, you have probably come across two dimensional arrays (matrices). These arrays are often used for storing information in tables or anything where you would want to have two dimensions for data. Since matrices have been so helpful with arrays, a two dimensional vector would be even better. However, there are a few tricks to making the two dimensional vector.

Declaring a two dimensional vector is similar to declaring an array. To start off, you declare your vector of whatever type you want. Use the following code to declare your vector:

vector<vector<int> > vec;

You might think that you can fill in the vectors right now, much like you would an array matrix, but one more step must be taken. Because there are no dimensions assigned for the vector matrix right now, the rows of the matrix don't actually exist yet. What we have after the code we just typed is an empty vector that can hold vectors of integers. To create the rows of the vector, we have to use a loop like the following:

for (int i = 0; i < 5; i++) {
    vector<int> row;
    vec.push_back(row);
}

This will fill the vector "vec" with empty vectors that can contain integers. Now, we can use an assignment statement to fill a part of the matrix:

vec[0].push_back(5);

And, as you can see by outputting the part of the matrix that was filled, the data was correctly assigned:

cout << vec[0][0] << endl;

Tada! You have now created a two dimensional vector. This code was written and compiled in Dev-C++ and may need some tweaking to run in other compilers.
Friday, April 4, 2014
How to Write the Perfect Cover Letter for Any Job
https://www.ziprecruiter.com/blog/2013/06/13/write-perfect-cover-letter/
Most job seekers seem to take for granted the impact of a well-written cover letter, thinking they are better off copying templates from websites. But the fact of the matter remains that a cover letter is not a mere "blanket" document, and a well-crafted one makes a great deal of difference to the outcome of your job search.
The cover letter's main purpose is to catch an employer's interest and entice him to read through your resume and call you for an interview. In fact, some employers even decide to hire a person only because they liked the cover letter. Here are tried and tested tips on how you could write the perfect cover letter for any job:
1) Make the opening line memorable
The problem with most job seekers is that their opening sentences are cliché. Statements like "Please accept my application to the job you posted" or "This is a reply to the position you advertised" are just common. They won't catch the employer's attention. The point is, your cover letter must stand out. It is your sales pitch sans the pushy or needy tone.
Take this for example: "With 6 years of goal-oriented experience in providing quality nursing care in a medical-surgical ward setting, I am offering…" The thing about this opening line is that it already addresses the need of the employer. It goes straight to the point and is clear about its purpose: you are applying for the position.
2) Don’t make the letter a narrative of your resume
If you do, it would just become your resume in the form of prose. Since the information is already redundant, the employer would skip reading your letter and proceed with the resume, since it presents your credentials more clearly. A very long cover letter would be harder to read and time-wasting. What you should include in the letter is information that could provide additional insight on your application.
3) Express genuine interest
Aside from giving the employer an idea of who you are and the skills that you possess, the cover letter must also express your genuine interest in their business and the particular role you're trying to take. Expressing your interest also helps in making you stand out among other candidates, thus you must have a clear understanding of the operations of your employer's business, and how you could contribute to it. However, you should also not overdo how you express your interest. Avoid flowery phrases and too many superlatives; some employers get turned off by them.
4) State the skills that you have for the position
Although it's not necessary to mention all of your credentials, you have to put forward the skills that make you the perfect fit for the job. State how your capabilities could meet the company's needs and expectations. It would also help if you could describe how the skills made a difference to the previous companies you have worked with.
Here's an example: "As current media buyer in this advertising agency, I was responsible for communicating with TV and radio networks in all 31 designated market areas and purchasing primetime TV and radio advertising inventory for our clients. Due to my expert handling of media buys, I was able to…" This example not only provides detail on how the applicant utilizes specific skills for the job, but it also shows the result of her tasks. Letting the employer know about some details of your previous job will give him enough insight and will interest him to look at your application more thoroughly.
5) Don’t use bullet points when you state your credentials
Bullets can shorten your resume, but a paragraph can help your employer gain insight on how you understand the job. Take a look at this nursing job application:
"Here are the skills that make me a perfect match to your job:
- Excellent and comprehensive assessment of client needs
- Setting SMART goals and objectives
- Evidence-based implementation of nursing care
- Evaluation of goal-based outcomes”
Compare that with this paragraph version:
"My key strength for the job is my expert application of the nursing process through excellent and comprehensive assessment of various needs of my clients. The data gathered allows me to formulate specific, measurable, attainable, realistic, and time-bound goal outcomes and implement the nursing care plan through evidence-based practice. The set goal outcomes also serve as a basis for me to evaluate the care I rendered."
6) State something positive about the company you’re applying to
You can express why you want to be part of their prestigious organization. You can state the company's management philosophy, reputation, sales record, size, product, or any other factors that make the company impressive. This will let them know that you have high regard for them. A little flattery can work for you.
7) Keep it short and concise
A short cover letter works beautifully as long as the necessary elements are there. As previously mentioned, don't make your letter a narrative of your resume. Keep in mind that the cover letter is not intended to explain everything about you; it just needs to attract the employer and let him take a close look at your application.
8) Close your letter in a professional manner
Although you express your interest in applying for the job, you don't want the employer to get the impression that you're desperate for the job. A great opening is required, but the right closing is equally important. In some cases, it's the bad closing that defeats the applicant:
"Thank you very much for considering my application. I promise that you will only have the best from me once I get accepted for the job."
“Thank you very much for considering me for the job. This application is very important to me, and it will definitely contribute to my personal development.”
Here’s an example of a good closing:
“I have attached my resume for your review. Thank you very much and I hope this merits your favorable response.”
9) Make your letter readable
Make sure you use readable fonts (fancy fonts such as freestyle won't do you any good). Use clean, white, A4 sized paper. Leave adequate space around the edges of the page and between each section and paragraph. Don't submit photocopied or marked letters; always keep it neat. Remember: even the physical appearance of the letter speaks for you.
While there's no single template that can give a perfect cover letter, practicing would definitely help you in creating a cover letter that works perfectly for you. Make rough drafts first, so that you can have your thoughts organized. Have another person read your letter, and take note of comments. Finally, keep copies of all the cover letters you sent; they can be used as basis for improving your next cover letter.