Friday, December 21, 2012

WebForms coding standards

Recently I have been working in a WebForms environment, and each time I go back to the WebForms stack I am surprised to realize that programmers in this world miss so many things that are obvious in other frameworks, like naming conventions. Why do I see things like:
txtFirstName
btnSubmit
lblLastName
This is totally wrong, and against Microsoft coding standards. I also see this in WinForms environments; programmers somehow do not realize that by doing it you:
Code to a type/control, not to the content. You should be able to change a control from any type to any other type without worrying about the variable name; here, when you change a type from Label to Literal, you need to change the name everywhere you use it.
Besides that, there are so many new or custom controls that it is hard to create a prefix for each of them, and if you use prefixes only for the built-in/default controls you are being inconsistent and the code looks like crap.

Thursday, December 20, 2012

Resharper and the end of the world

21 December 2012 is the last day of the world. That's why for the last 3 hours I have been trying to upgrade my Resharper from version 6 to version 7, because the price is 75% off. Unfortunately, for the last 3 hours the only response that I got was
But I see progress! Now I am getting:
I don't have much time - there are still 20 more hours to go till the end of the world.

Monday, December 17, 2012

RSA commonly uses keys of 1024, 2048 or even 3072 bits, while most symmetric algorithms use keys of only between 112 and 256 bits.

The ultimate question is: should I use a longer key? And ladies and gentlemen, here is an answer from Bruce Schneier's book

Longer key lengths are better, but only up to a point. AES will have 128-bit, 192-bit, and 256-bit key lengths. This is far longer than needed for the foreseeable future. In fact, we cannot even imagine a world where 256-bit brute force searches are possible. It requires some fundamental breakthroughs in physics and our understanding of the universe.

One of the consequences of the second law of thermodynamics is that a certain amount of energy is necessary to represent information. To record a single bit by changing the state of a system requires an amount of energy no less than kT, where T is the absolute temperature of the system and k is the Boltzmann constant. (Stick with me; the physics lesson is almost over.)

Given that k = 1.38 × 10^-16 erg/K, and that the ambient temperature of the universe is 3.2 Kelvin, an ideal computer running at 3.2 K would consume 4.4 × 10^-16 ergs every time it set or cleared a bit. To run a computer any colder than the cosmic background radiation would require extra energy to run a heat pump.

Now, the annual energy output of our sun is about 1.21 × 10^41 ergs. This is enough to power about 2.7 × 10^56 single bit changes on our ideal computer; enough state changes to put a 187-bit counter through all its values. If we built a Dyson sphere around the sun and captured all its energy for 32 years, without any loss, we could power a computer to count up to 2^192. Of course, it wouldn't have the energy left over to perform any useful calculations with this counter.

But that's just one star, and a measly one at that. A typical supernova releases something like 10^51 ergs. (About a hundred times as much energy would be released in the form of neutrinos, but let them go for now.) If all of this energy could be channeled into a single orgy of computation, a 219-bit counter could be cycled through all of its states.

These numbers have nothing to do with the technology of the devices; they are the maximums that thermodynamics will allow. And they strongly imply that brute-force attacks against 256-bit keys will be infeasible until computers are built from something other than matter and occupy something other than space.


An excellent explanation
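As a sanity check, the quoted figures can be replayed in a few lines of Python (the constants are taken straight from the passage above):

```python
import math

energy_per_bit = 4.4e-16      # ergs per bit flip at 3.2 K (k * T from the passage)
sun_per_year = 1.21e41        # annual energy output of the sun, ergs

flips = sun_per_year / energy_per_bit
print(f"{flips:.2e}")               # ~2.7e56 single bit changes
print(round(math.log2(flips)))      # 187: a 187-bit counter through all its values

# A lossless Dyson sphere for 32 years adds log2(32) = 5 more bits.
print(round(math.log2(32 * flips))) # 192: enough to count up to 2^192
```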

Thursday, December 13, 2012

IDisposable interface - how do I know?

Many developers that I met had a problem figuring out whether a class they use implements the IDisposable interface. Consider the following code:
[System.Diagnostics.DebuggerStepThroughAttribute()]
[System.CodeDom.Compiler.GeneratedCodeAttribute("System.ServiceModel", "4.0.0.0")]
public partial class ContentHubDataCacheSoapClient : System.ServiceModel.ClientBase<ContentHubDataCacheSoap>, ContentHubDataCacheSoap {
}
When I create an instance of ContentHubDataCacheSoapClient, should I dispose of it?

What developers typically did was use the Object Browser to see if there is a method called Dispose; see the picture below for an example matching the code above :)
It's easy to see that there is no Dispose method listed there, but I circled in red where one can see that a class implements the IDisposable interface.

Other developers, answering the question, suggested that IDisposable can be satisfied by implementing a Close method (this is not true; see the code example below).

But why was the Dispose method not listed in the list of methods? The answer is simply that one can implement a method explicitly, naming the interface that requires it, and thanks to that it will not be listed in the Object Browser. An example is below.
public class DisposingClass : IDisposable
{
    public void Dispose() { }
}

public class ClosingClass
{
    public void Close() { }
}

public class ImplementingDisposableInterfaceClass : IDisposable
{
    // Explicit interface implementation: Dispose will not show up
    // in the Object Browser on the class itself.
    void IDisposable.Dispose()
    {
        Close();
    }

    public void Close() { }
}

public class ChildClass : ImplementingDisposableInterfaceClass { }

public class UsingClass
{
    public void UsingMethod()
    {
        // Compile-time error: ClosingClass does not implement IDisposable,
        // so it cannot be used in a using statement - a Close method is not enough.
        using (var c = new ClosingClass())
        {
        }

        // The typical way of implementing IDisposable.
        using (var d = new DisposingClass())
        {
        }

        // Works: Dispose is implemented explicitly.
        using (var d = new ImplementingDisposableInterfaceClass())
        {
        }

        // Works too, even though Dispose is not shown in the Object Browser.
        using (var d = new ChildClass())
        {
        }
    }
}

Tuesday, December 11, 2012

Dealing with a hung SQL Backup

The backup process is triggered by a SQL Server job. One can see which SQL Server jobs are currently running by executing the following query:
exec msdb..sp_help_job @execution_status = 1
In order to see all the queries that are being executed we can use sp_who; inside the cmd column we should see a BACKUP string. The sp_who query also tells us what SPID the backup process has:
exec sp_who
And the query below shows the status of a backup process - it displays the estimated completion percentage and time:
SELECT r.session_id,r.command,CONVERT(NUMERIC(6,2),r.percent_complete)
AS [Percent Complete],CONVERT(VARCHAR(20),DATEADD(ms,r.estimated_completion_time,GetDate()),20) AS [ETA Completion Time],
CONVERT(NUMERIC(10,2),r.total_elapsed_time/1000.0/60.0) AS [Elapsed Min],
CONVERT(NUMERIC(10,2),r.estimated_completion_time/1000.0/60.0) AS [ETA Min],
CONVERT(NUMERIC(10,2),r.estimated_completion_time/1000.0/60.0/60.0) AS [ETA Hours],
CONVERT(VARCHAR(1000),(SELECT SUBSTRING(text,r.statement_start_offset/2,
CASE WHEN r.statement_end_offset = -1 THEN 1000 ELSE (r.statement_end_offset-r.statement_start_offset)/2 END)
FROM sys.dm_exec_sql_text(sql_handle)))
FROM sys.dm_exec_requests r WHERE command IN ('RESTORE DATABASE','BACKUP DATABASE')

Files created by jobs run by Task Scheduler

On Windows Server 2008 R2 Enterprise 64-bit, if Task Scheduler runs a task that creates a folder or a file in the current execution path, it will be created in:
C:\Windows\SysWOW64\
It means that if your task runs the following code:
using (var file = new System.IO.StreamWriter("foo.txt"))
{
 file.WriteLine("bar");
}
the file "foo.txt" will be created in the following path:
C:\Windows\SysWOW64\foo.txt
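The pitfall is not specific to .NET: a relative path is resolved against the process's current working directory, which a scheduler may set to anything. A minimal Python sketch of the same behavior:

```python
import os
import tempfile

# Simulate a scheduler starting the process with an arbitrary working directory.
os.chdir(tempfile.gettempdir())

with open("foo.txt", "w") as f:   # relative path: resolved against the cwd
    f.write("bar\n")

# The file landed wherever the process was started, not next to the script.
print(os.path.abspath("foo.txt"))
```

The fix is to build paths from a known anchor (in .NET, for example the assembly's base directory) instead of relying on the current working directory.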

Monday, December 10, 2012

Using, Exceptions and Ctrl+C

Many people believe that when they use a using statement the Dispose method is always going to be called, and that they are safe to clean up resources there. For investigation purposes I also included catch and finally statements, because a using statement can be thought of as the following code:
try
{
    // The code inside the using statement
}
finally
{
    // The Dispose method
}
As seen in the example above, using does not rethrow or stop an exception; it just lets it propagate.

Let us consider the code below:
static void Main(string[] args)
{
    Console.CancelKeyPress += new ConsoleCancelEventHandler(myHandler);

    try
    {
        using (new DisposableClass())
        {
            while (true) { 

            }
        }
    }
    catch
    {
        Console.WriteLine("Catch");
    }
    finally {
        Console.WriteLine("Finally");
    }
}

protected static void myHandler(object sender, ConsoleCancelEventArgs args)
{
    Console.WriteLine("myHandler intercepted");
}
where DisposableClass is listed below:
public class DisposableClass : IDisposable
{
    public void Dispose() {
        Console.WriteLine("Dispose");
    }
}
Many people forget that there are other ways to interrupt an application than exceptions (e.g. forcefully aborting a thread). An example is the good old SIGINT signal, also known as Ctrl+C. If the application is executing the code above and is in the while loop, and someone presses Ctrl+C, then the Dispose method and the catch and finally blocks are not going to be called. If there is a need to intercept this signal, one needs to sign up for the CancelKeyPress event, just as in the code above. In other words, after pressing Ctrl+C the only line that is going to be displayed is:
myHandler intercepted
The next example of the using/catch/finally block not executing is running
Thread.Abort()
In the majority of cases the block will be executed, but when a thread is nearly finished and has entered its finally block, and someone then calls Thread.Abort(), the finally block is not going to be executed.

Autofac beta and dependencies

I created a project that used a new version of Autofac, installed via NuGet:
Install-Package Autofac -Pre
Everything was working fine until I deployed the project to production. And then I saw an error:
Unhandled Exception: System.IO.FileLoadException: Could not load file or assembly 'System.Core, Version=2.0.5.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e, Retargetable=Yes' or one of its dependencies. The given assembly name or code
base was invalid. (Exception from HRESULT: 0x80131047)
   at Autofac.Builder.RegistrationData..ctor(Service defaultService)
   at Autofac.Builder.RegistrationBuilder`3..ctor(Service defaultService, TActiv
atorData activatorData, TRegistrationStyle style)
   at Autofac.Builder.RegistrationBuilder.ForType[TImplementer]()
   at Autofac.RegistrationExtensions.RegisterType[TImplementer](ContainerBuilder
 builder)
   at TTC.ContentHubDataCache.ContainerSetup.BuildContainer() in C:\Development\
BrandWebsites\TrafalgarTools\TTC.ContentHubDataCache\TTC.ContentHubDataCache\Con
tainerSetup.cs:line 26
   at TTC.ContentHubDataCache.UpdateDataCacheProcess..ctor() in C:\Development\B
randWebsites\TrafalgarTools\TTC.ContentHubDataCache\TTC.ContentHubDataCache\Upda
teDataCacheProcess.cs:line 58
   at TTC.ContentHubDataCache.Program.Main(String[] args) in C:\Development\Bran
dWebsites\TrafalgarTools\TTC.ContentHubDataCache\TTC.ContentHubDataCache\Program
.cs:line 9
It looked like Autofac referenced System.Core in a really old version. A quick look at the Autofac.dll dependencies in ILDASM, under the MANIFEST section, showed that:
.assembly extern retargetable System.Core
{
  .publickeytoken = (7C EC 85 D7 BE A7 79 8E )                         // |.....y.
  .ver 2:0:5:0
}
The beta version of Autofac (Autofac 3.0.0-beta) is using an old System.Core: it is built against .NET 4.0, and yet it references System.Core version 2.0.5.0 - how bizarre. I uninstalled this version of Autofac and took an older one:
uninstall-package autofac
Install-Package Autofac -Version 2.6.3.862
A quick check of the dependencies:
.assembly extern System.Core
{
  .publickeytoken = (B7 7A 5C 56 19 34 E0 89 )                         // .z\V.4..
  .ver 4:0:0:0
}
Looks good, and it solved my problem. Why they used System.Core version 2.0.5.0 I do not know; most likely the beta was built as a Portable Class Library (retargetable references to System.Core 2.0.5.0 are a PCL signature), and loading such assemblies on .NET 4.0 requires the KB2468871 runtime update.

Monday, December 3, 2012

Measuring SQL Query execution time

It is not recommended to measure SQL execution time on the DB; wise guys believe that it is much more meaningful to run performance tests from the application, so that the response time also includes the network delay and the SQL provider computation time. In my scenario I do not have access to the application, and there are many entry points for one SQL query. That is why it is optimized on the DB side. The query below displays performance metrics of a query; in my example I am interested in each query that includes the 'VersionedFields' string.
SELECT TOP 40 *
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.text LIKE '%VersionedFields%'
ORDER BY qs.last_execution_time DESC
'Elapsed time' is the most important metric for me. A much more low-level way to measure a query's response time is to use STATISTICS:
DBCC DROPCLEANBUFFERS
SET STATISTICS IO ON 
GO
SET STATISTICS TIME ON
GO

-- SQL Query goes here like, SELECT * FROM VersionedFields

DBCC DROPCLEANBUFFERS
SET STATISTICS IO OFF
GO
SET STATISTICS TIME OFF
GO
Notes regarding DROPCLEANBUFFERS: I only run it on a test bed, never on production.
Use DBCC DROPCLEANBUFFERS to test queries with a cold buffer cache without shutting down and restarting the server.
To drop clean buffers from the buffer pool, first use CHECKPOINT to produce a cold buffer cache. This forces all dirty pages for the current database to be written to disk and cleans the buffers. After you do this, you can issue DBCC DROPCLEANBUFFERS command to remove all buffers from the buffer pool.
If one really wants to clear the entire cache, a CHECKPOINT should also be used.
[CHECKPOINT] Writes all dirty pages for the current database to disk. Dirty pages are data pages that have been entered into the buffer cache and modified, but not yet written to disk. Checkpoints save time during a later recovery by creating a point at which all dirty pages are guaranteed to have been written to disk.
I do not run checkpoint that often.

Shrinking Log file in SQL Server 2008

The procedure for shrinking a DB log file (transaction log file) in SQL Server Management Studio:
Right click on the DB -> Properties -> Options -> Recovery model -> change from 'Full' to 'Simple' -> OK
Right click on the DB -> Tasks -> Shrink -> Files -> File type -> Log -> OK
The shrinking procedure should not take more than 3 s. It is not possible to change the DB recovery model this way if mirroring is set up. Now, some rules of shrinking and maintaining a log file:
  • By default Recovery model is set to FULL
  • If you store in a DB crucial/important information, then recovery model should be set to FULL
  • If recovery model is set to FULL, it means that there should be a backup in place, a backup that also includes a transaction log file
  • When a backup of the transaction log file runs, the transaction log is truncated. The truncation occurs after a checkpoint process
  • So if your transaction log is big, like 7 GB, it means that:
    • You don't have a backup that includes the transaction log file, which means you don't need the FULL recovery model
    • Or your backup is not working
    • Or you have a big and heavily used database
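For reference, the same procedure can be scripted in T-SQL instead of clicking through Management Studio. A sketch, assuming a hypothetical database named MyDb whose log file has the logical name MyDb_log (look the real name up in sys.database_files):

```sql
ALTER DATABASE [MyDb] SET RECOVERY SIMPLE;

USE [MyDb];
DBCC SHRINKFILE (MyDb_log, 1);  -- target size in MB

-- Switch back only if FULL recovery (and log backups) are really needed:
-- ALTER DATABASE [MyDb] SET RECOVERY FULL;
```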

Tuesday, November 27, 2012

Host and Dig under Windows

Recently I had some DNS problems, and I was looking for the host and dig commands under Windows. There is an excellent port that also includes the whois command. I totally forgot that under Windows there is nslookup (I am getting too old for this). But I figured out that I know how to do it based on the System.Net library, so I used PowerShell to resolve a host:
[System.Net.Dns]::GetHostAddresses("www.google.com")
I always keep in my memory a public google DNS server.
google public DNS: 8.8.8.8
But unfortunately there is no way in .NET to specify which DNS server to use to resolve a host. The reason is that the Dns.Resolve method relies on internal Win32 APIs, which in turn go through the DNS servers associated with the network connection. In order to change the DNS server one needs to reconfigure the network adapter. Ech... I can tell that I am more a developer than an admin :)
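The same limitation exists in other standard libraries; for illustration, a minimal Python sketch (its resolver likewise goes through the OS, so a server such as 8.8.8.8 cannot be selected without a dedicated DNS library):

```python
import socket

# gethostbyname_ex uses the OS resolver, just like System.Net.Dns:
# there is no parameter for choosing which DNS server answers the query.
host, aliases, addresses = socket.gethostbyname_ex("localhost")
print(host, addresses)
```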

Thursday, November 22, 2012

A great query to identify missing indexes, ordered by Total Cost; a second query further below prints them as CREATE INDEX statements.
SELECT  TOP 10 
        [Total Cost]  = ROUND(avg_total_user_cost * avg_user_impact * (user_seeks + user_scans),0) 
        , avg_user_impact
        , TableName = statement
        , [EqualityUsage] = equality_columns 
        , [InequalityUsage] = inequality_columns
        , [Include Columns] = included_columns
FROM        sys.dm_db_missing_index_groups g 
INNER JOIN    sys.dm_db_missing_index_group_stats s 
       ON s.group_handle = g.index_group_handle 
INNER JOIN    sys.dm_db_missing_index_details d 
       ON d.index_handle = g.index_handle
ORDER BY [Total Cost] DESC;
where Total Cost stands for:
Total Cost = (avg_total_user_cost * avg_user_impact * (user_seeks + user_scans)) / 1,000,000
  • avg_total_user_cost – Average cost of the user queries that could be reduced by the index in the group
  • avg_user_impact – Average percentage benefit that user queries could experience if this missing index group was implemented. The value means that the query cost would on average drop by this percentage if this missing index group was implemented
  • user_seeks – Number of seeks caused by user queries that the recommended index in the group could have been used for
  • user_scans – Number of scans caused by user queries that the recommended index in the group could have been used for
Below is a nice query to generate an actual command for an index creation:
PRINT 'Missing Indexes: '
PRINT 'The "improvement_measure" column is an indicator of the (estimated) improvement that might '
PRINT 'be seen if the index was created. This is a unitless number, and has meaning only relative '
PRINT 'to the same number for other indexes. The measure is a combination of the avg_total_user_cost, '
PRINT 'avg_user_impact, user_seeks, and user_scans columns in sys.dm_db_missing_index_group_stats.'
PRINT ''
PRINT '-- Missing Indexes --'
SELECT CONVERT (varchar, getdate(), 126) AS runtime,
  mig.index_group_handle, mid.index_handle,
  CONVERT (decimal (28,1), migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans)) AS improvement_measure,
  'CREATE INDEX missing_index_' + CONVERT (varchar, mig.index_group_handle) + '_' + CONVERT (varchar, mid.index_handle)
  + ' ON ' + mid.statement
  + ' (' + ISNULL (mid.equality_columns,'')
    + CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END + ISNULL (mid.inequality_columns, '')
  + ')'
  + ISNULL (' INCLUDE (' + mid.included_columns + ')', '') AS create_index_statement,
  migs.*, mid.database_id, mid.[object_id]
FROM sys.dm_db_missing_index_groups mig
INNER JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle
WHERE CONVERT (decimal (28,1), migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans)) > 10
ORDER BY migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC
PRINT ''
GO 

Tuesday, November 20, 2012

Thread, Wait and debugging Sitecore under WinDbg

When running a command to display all threads and the stack assigned to each thread, it is common to see the following picture:
~*e !CLRStack
...
OS Thread Id: 0x10ec (37)
Child-SP         RetAddr          Call Site
000000000c4de8e0 000007ff006e168f Sitecore.IO.FileWatcher.Worker()
000000000c4de930 000007ff0100e510 System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
000000000c4de980 000007fef96ceb52 System.Threading.ThreadHelper.ThreadStart()
OS Thread Id: 0x10f8 (38)
...
OS Thread Id: 0x1268 (43)
Child-SP         RetAddr          Call Site
000000000ba5e4e0 000007ff01298285 System.Threading.WaitHandle.WaitAny(System.Threading.WaitHandle[], Int32, Boolean)
000000000ba5e540 000007ff006e168f System.Net.TimerThread.ThreadProc()
000000000ba5e610 000007ff0100e510 System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
000000000ba5e660 000007fef96ceb52 System.Threading.ThreadHelper.ThreadStart()
...
OS Thread Id: 0x1368 (46)
Child-SP         RetAddr          Call Site
000000000cf7e5c0 000007ff00f6ed56 System.Threading.Thread.Sleep(System.TimeSpan)
000000000cf7e600 000007ff006e168f Sitecore.Services.Heartbeat.WorkLoop()
000000000cf7e6c0 000007ff0100e510 System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
000000000cf7e710 000007fef96ceb52 System.Threading.ThreadHelper.ThreadStart()
...
OS Thread Id: 0x1008 (49)
Child-SP         RetAddr          Call Site
000000000d03e5f0 000007ff0163a0b7 Sitecore.Threading.Semaphore.P()
000000000d03e640 000007ff006e168f Sitecore.Threading.ManagedThreadPool.ProcessQueuedItems()
000000000d03e6a0 000007ff0100e510 System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
000000000d03e6f0 000007fef96ceb52 System.Threading.ThreadHelper.ThreadStart()
...
Each of the stack traces listed above appears a few times in the memory dump. When analyzing any of those stacks there is no magic or suspicious stuff; this is how Sitecore was designed and how it works.

PerfMon typical counters

A list of general counters that I usually use. I set up PerfMon to run in the background and log data.
Web Service->Current Connections->Publishing-shared
Web Service->Get Requests/sec->Publishing-shared
Web Service->Current Anonymous Users->Publishing-shared
System -> Processor Queue Length
Processor -> % Processor Time->_Total
Process -> Working Set->_Total
PhysicalDisk -> Current Disk Queue Length->_Total
PhysicalDisk -> Disk Bytes/sec->_Total
Network Interface-> Bytes Total/sec->[Select network adapter]
Network Interface-> Output Queue Length->[Select network adapter]
Network Interface-> Packets Received Errors->[Select network adapter]
Memory -> Available MBytes
Memory -> Pages Input/sec

Tuesday, November 13, 2012

IIS logs

Some people tried to convince me that by default IIS does not log the response time. IIS supports the following logging formats:
  • IIS
  • NCSA
  • W3C
  • Custom
By default W3C is used. Apart from Custom, W3C is the only format that allows you to specify which fields are logged. By default its configuration looks like below:

The field that is responsible for logging the response time is time-taken, and by default it is turned on.
And here is one more query that I found useful; I tend to run it with GET, POST, and NOT LIKE 'GET' or 'POST', to compare what types of requests are hitting my site.
logparser "SELECT count(cs-uri-stem) FROM u_ex121103.log where cs-method = 'GET'"

Monday, November 12, 2012

Analyzing IIS logs

Some useful, quickly written queries to analyze IIS logs when something wrong is going on. LogParser needs to be installed.
The most-hit resources on a server, filtered to exclude media files:
logparser -i:IISW3C "SELECT TOP 10 cs-uri-stem AS Url, MIN(time-taken) as [Min], AVG(time-taken) AS [Avg], max(time-taken) AS [Max], count(time-taken) AS Hits FROM  u_ex121025.log TO 'MostHitResourcesFiltered121025.csv' WHERE cs-uri-stem NOT LIKE '%media%' AND cs-uri-stem NOT LIKE '%.swf' AND cs-uri-stem NOT LIKE '%.jpg' AND cs-uri-stem NOT LIKE '%.mp3' AND cs-uri-stem NOT LIKE '%.js' AND cs-uri-stem NOT LIKE '%.woff' AND cs-uri-stem NOT LIKE '%.css' AND cs-uri-stem NOT LIKE '%.png' AND cs-uri-stem NOT LIKE '%.gif' AND cs-uri-stem NOT LIKE '%.eot' AND cs-uri-stem NOT LIKE '%.ico' GROUP BY Url ORDER BY [Hits] DESC"  -o:csv

Requests that took the longest time to answer - I like to order them by Avg, Min and Max. Below is an example sorted by Avg:
logparser -i:IISW3C "SELECT TOP 10 cs-uri-stem AS Url, MIN(time-taken) as [Min], AVG(time-taken) AS [Avg], max(time-taken) AS [Max], count(time-taken) AS Hits FROM  u_ex121025.log  TO 'Avg121025.csv' WHERE cs-uri-stem NOT LIKE '%media%' AND cs-uri-stem NOT LIKE '%.swf' AND cs-uri-stem NOT LIKE '%.jpg' AND cs-uri-stem NOT LIKE '%.mp3' AND cs-uri-stem NOT LIKE '%.js' AND cs-uri-stem NOT LIKE '%.woff' AND cs-uri-stem NOT LIKE '%.css' AND cs-uri-stem NOT LIKE '%.png' AND cs-uri-stem NOT LIKE '%.gif' AND cs-uri-stem NOT LIKE '%.eot' GROUP BY Url HAVING Hits > 5 ORDER BY [Avg] DESC" -o:csv

When I am interested in a specific page and want to know as much as I can about it. In the example below, I am interested in a URL that has the 'last' string inside it:
logparser -i:IISW3C "SELECT date, time, s-ip, cs-method, cs-uri-stem, cs-uri-query, s-port, cs-username, c-ip, cs(User-Agent), sc-status, sc-substatus, sc-win32-status, time-taken FROM  u_ex121103.log  TO 'WhoHitLMO121103.csv' WHERE cs-uri-stem NOT LIKE '%media%' AND cs-uri-stem NOT LIKE '%.swf' AND cs-uri-stem NOT LIKE '%.jpg' AND cs-uri-stem NOT LIKE '%.mp3' AND cs-uri-stem NOT LIKE '%.js' AND cs-uri-stem NOT LIKE '%.woff' AND cs-uri-stem NOT LIKE '%.css' AND cs-uri-stem NOT LIKE '%.png' AND cs-uri-stem NOT LIKE '%.gif' AND cs-uri-stem NOT LIKE '%.eot' AND cs-uri-stem LIKE '%last%'" -o:csv

A load testing tool

JMeter takes too much time to set up, and usually I am not interested in results or response time. The simplest way to load test an app is to use TinyGet:
tinyget -srv:uat3.google.com -uri:/usa/offers/LHO -threads:10 -loop:20

Profiling SQL Server 2005 and 2008

I am used to using SQL Server Dashboard for profiling SQL Server 2005. By profiling I mean: something nasty is going on between the application and SQL Server, and no one has any clue what the reason is. This is why Microsoft wrote a series of useful reports that gather information, or use data stored in the master database, to help you understand what the reason may be. Unfortunately SQL Server 2008 does not support the Dashboard; it was replaced by SQL Server Performance Studio. I installed Performance Studio recently, hoping to see something similar to the Dashboard; unfortunately Performance Studio is a full-blown warehouse - it gathers plenty of additional counters while the application is running, and it hugely increases CPU usage. While running it on my test bed I received plenty of timeout requests to the DB server, so I don't recommend installing it on a machine that is facing CPU utilization or memory problems :) Due to that I had to use good old SQL queries to figure out what is going on. There is a nice article explaining how to do it. To see which DB is hit the most I use:
SELECT TOP 10 
        [Total Reads] = SUM(total_logical_reads)
        ,[Execution count] = SUM(qs.execution_count)
        ,DatabaseName = DB_NAME(qt.dbid)
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
GROUP BY DB_NAME(qt.dbid)
ORDER BY [Total Reads] DESC;

SELECT TOP 10 
        [Total Writes] = SUM(total_logical_writes)
        ,[Execution count] = SUM(qs.execution_count)
        ,DatabaseName = DB_NAME(qt.dbid)
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
GROUP BY DB_NAME(qt.dbid)
ORDER BY [Total Writes] DESC;
NULL means a temporary table not assigned to any DB. To see the most costly queries:
SELECT TOP 10 
 [Average IO] = (total_logical_reads + total_logical_writes) / qs.execution_count
,[Total IO] = (total_logical_reads + total_logical_writes)
,[Execution count] = qs.execution_count
,[Individual Query] = SUBSTRING (qt.text,qs.statement_start_offset/2, 
         (CASE WHEN qs.statement_end_offset = -1 
            THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2 
          ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) 
        ,[Parent Query] = qt.text
,DatabaseName = DB_NAME(qt.dbid)
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
ORDER BY [Average IO] DESC;
Queries executed most often
SELECT TOP 10 
 [Execution count] = execution_count
,[Individual Query] = SUBSTRING (qt.text,qs.statement_start_offset/2, 
         (CASE WHEN qs.statement_end_offset = -1 
            THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2 
          ELSE qs.statement_end_offset END - qs.statement_start_offset)/2)
,[Parent Query] = qt.text
,DatabaseName = DB_NAME(qt.dbid)
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
ORDER BY [Execution count] DESC;

Useful PerfMon counters for Memory Profiling

Counters that I usually add to PerfMon to figure out what is going on in memory. The counters need to be attached to the w3wp.exe process:
    .NET CLR Memory/# Bytes in all Heaps
    .NET CLR Memory/Large Object Heap Size
    .NET CLR Memory/Gen 2 heap size
    .NET CLR Memory/Gen 1 heap size
    .NET CLR Memory/Gen 0 heap size
    Process/Private Bytes
    Process/Virtual Bytes

.NET crash dump debugging

Yet again I was recently debugging an application using WinDbg. Here is a list of more commands that I found useful. First, when I load WinDbg under Windows 7, I run:
.sympath SRV*DownstreamStore*http://msdl.microsoft.com/download/symbols
.reload
.loadby sos mscorwks
Then I install some useful extensions in a folder. Those extensions are useful especially for GC collection debugging and searching for deadlocks, and they come with a brief description of the commands. Then I run:
.load C:\worek\sosex\sosex
With an extension installed I can check for deadlocks
!locks                     -- original WinDbg command
!dlk                       -- really nice extension
After checking for deadlocks I look at the memory; I am interested in how much memory is used. Unknown memory is virtually allocated memory (which includes the managed heaps).
!address -summary
......
--- Usage Summary ---------------- RgnCount ----------- Total Size -------- %ofBusy %ofTotal
Free                    713      7fb`d81ce000 (   7.984 Tb)           99.80%
<unknown>        2328        3`fb052000 (  15.922 Gb)  95.78%    0.19%
Heap                    188        0`18782000 ( 391.508 Mb)   2.30%    0.00%
Image                  3046        0`11147000 ( 273.277 Mb)   1.61%    0.00%
Stack                   303        0`03280000 (  50.500 Mb)   0.30%    0.00%
Other                    19        0`001bc000 (   1.734 Mb)   0.01%    0.00%
TEB                     101        0`000ca000 ( 808.000 kb)   0.00%    0.00%
PEB                       1        0`00001000 (   4.000 kb)   0.00%    0.00%
.....
To see what is inside the virtual memory one can run:
!address -f:VAR
Then I look at the types in memory: the number of objects allocated per type and the amount of memory used. String is a primitive type that eats a huge amount of memory.
!dumpheap -stat
.........
000007fef8eeeb88   109006      4360240 System.Collections.ArrayList
000007fef8ee7590   190421      4570104 System.Object
000007ff009bfa08    24963      4792896 Sitecore.Data.Items.Item
000007ff00a76300   102363      9826848 Sitecore.Caching.Cache+CacheEntry
000007fef8eef5f8   156519     13773672 System.Collections.Hashtable
000007fef8ed5a90   181410     17197296 System.Object[]
000007ff009bc690   573156     27511488 Sitecore.Data.ID
000007fef8eefce0     6083     43988712 System.Byte[]
000007fef8eef7c0   156694     63591264 System.Collections.Hashtable+bucket[]
000007fef8ee7ca0   903961    149734600 System.String
00000000011b5cc0     8161   1172482272      Free
Total 3615363 objects
Fragmented blocks larger than 0.5 MB:
            Addr     Size      Followed by
0000000143871e38    0.7MB 000000014392e858 System.String
0000000143b48248    0.6MB 0000000143bdb670 System.String
000000023f531080    9.1MB 000000023fe4f130 System.Threading.Overlapped
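When the statistics run to hundreds of lines, it can be handy to post-process a saved log offline. A minimal sketch, assuming the output was saved to text and that the columns follow the MT / count / total-size / type-name layout shown above:

```python
import re

def parse_dumpheap_stat(text):
    """Parse '!dumpheap -stat' rows into (type name, count, total size)."""
    rows = []
    for line in text.splitlines():
        # MT (hex), object count, total size in bytes, then the type name.
        m = re.match(r'\s*([0-9a-f]{8,16})\s+(\d+)\s+(\d+)\s+(\S.*)', line)
        if m:
            _mt, count, size, name = m.groups()
            rows.append((name.strip(), int(count), int(size)))
    # Largest memory consumers first.
    return sorted(rows, key=lambda r: r[2], reverse=True)

sample = """
000007fef8ed5a90   181410     17197296 System.Object[]
000007fef8ee7ca0   903961    149734600 System.String
"""
top = parse_dumpheap_stat(sample)
# top[0] is System.String, the biggest consumer in this sample
```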
To list the instances of a specific type:
!dumpheap -type TTC.TT.Nelson.Services.Data.DepartureData
...
Heap 0
         Address               MT     Size
00000000ff453010 000007ff01591f00       88     
0000000100115190 000007ff015925a8       64     
0000000100116a38 000007ff015925a8       64     
0000000100658c70 000007ff015925a8       64     
0000000100658cb0 000007ff015c6b10       64     
0000000100658d18 000007ff015925a8       64     
0000000100800e88 000007ff01591f00       88     
0000000100800fb0 000007ff01591f00       88     
00000001008010d8 000007ff01591f00       88     
0000000100801200 000007ff01591f00       88     
0000000100801328 000007ff01591f00       88     
....
Then I take a specific address and try to trace its GC roots:
!gcroot 00000000ff453010
...
Scan Thread 99 OSTHread 1384
DOMAIN(000000000263FAA0):HANDLE(Pinned):16415c0:Root:00000001ff419258(System.Object[])->
0000000140791438(System.Func`2[[TTC.TT.Nelson.Services.Data.DepartureData, TTC.TT.Nelson],[TTC.TT.Entities.TropicsAvailabilityStatus, TTC.TT.Entities]])
...
To see just managed threads:
!threads
...
                                              PreEmptive                                                Lock
       ID OSID        ThreadOBJ     State   GC     GC Alloc Context                  Domain           Count APT Exception
  10    1  e90 00000000025ec820      8220 Enabled  0000000000000000:0000000000000000 00000000025e7950     0 Ukn
  20    2  560 00000000026003f0      b220 Enabled  0000000000000000:0000000000000000 00000000025e7950     0 MTA (Finalizer)
  21    3  41c 000000000263e700    80a220 Enabled  0000000000000000:0000000000000000 00000000025e7950     0 MTA (Threadpool Completion Port)
  22    4 1270 000000000263f4d0      1220 Enabled  0000000000000000:0000000000000000 00000000025e7950     0 Ukn
  25    8 14a0 0000000004a701f0   200b020 Enabled  0000000000000000:0000000000000000 000000000263faa0     0 MTA
...
To see the managed call stack of every thread:
~*e !CLRStack
...
OS Thread Id: 0x14a0 (25)
Child-SP         RetAddr          Call Site
0000000008fdede0 000007fef8da2bbb Sitecore.IO.FileWatcher.Worker()
0000000008fdee30 000007fef8e3aa7d System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
0000000008fdee80 000007fef9cc0282 System.Threading.ThreadHelper.ThreadStart()
OS Thread Id: 0x3d8 (26)
Child-SP         RetAddr          Call Site
000000000918e950 000007fef8da2bbb Sitecore.IO.FileWatcher.Worker()
000000000918e9a0 000007fef8e3aa7d System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
000000000918e9f0 000007fef9cc0282 System.Threading.ThreadHelper.ThreadStart()
...
To see what objects are waiting to be finalized:
!finalizequeue
...
000007fef8ee88c0       76         3648 System.Globalization.AgileSafeNativeMemoryHandle
000007fef8ed5270       35         4200 System.IO.FileStream
000007fef8ee8740       57         5928 System.Threading.Thread
000007fef5380810      239        13384 System.Web.DirMonCompletion
000007fef8edf1b8      444        14208 System.WeakReference
000007fef52fc188      319        25520 System.Web.UI.WebControls.Style
000007fef8f10548      956        68832 System.Reflection.Emit.DynamicResolver
000007fef5391310    18366       734640 System.Web.HttpResponseUnmanagedBufferElement
To see CPU utilization along with information about how many thread pool threads and timers are in use:
!threadpool
...
CPU utilization 61%
Worker Thread: Total: 10 Running: 0 Idle: 10 MaxLimit: 400 MinLimit: 4
...
To profile the cache:
!dumpheap -type System.Web.Caching.Cache -stat
Heap 3
total 5288 objects
------------------------------
total 22083 objects
Statistics:
              MT    Count    TotalSize Class Name
000007fef53815a0        1           24 System.Web.Caching.CacheKeyComparer
000007fef5381090        1           24 System.Web.Caching.Cache
000007fef5381848        1           32 System.Web.Caching.CacheKey
000007fef53813e0        1           40 System.Web.Caching.CacheMultiple
...
Then I follow the System.Web.Caching.Cache object:
!dumpheap -mt 000007fef5381090
...
Heap 1
         Address               MT     Size
000000013f41d210 000007fef5381090       24 
...
Heap 3
         Address               MT     Size
total 0 objects
------------------------------
total 1 objects
Statistics:
              MT    Count    TotalSize Class Name
000007fef5381090        1           24 System.Web.Caching.Cache
And I dump the object to see its fields:
!do 000000013f41d210
              MT    Field   Offset                 Type VT     Attr            Value Name
000007fef5380ed0  40013aa        8 ...ing.CacheInternal  0 instance 000000013f41d358 _cacheInternal
000007fef8f281e0  40013a8      370      System.DateTime  1   shared           static NoAbsoluteExpiration
                                 >> Domain:Value  00000000025e7950:NotInit  000000000263faa0:000000013f423990 0000000004d49820:NotInit  <<
000007fef8f280e0  40013a9      378      System.TimeSpan  1   shared           static NoSlidingExpiration
                                 >> Domain:Value  00000000025e7950:NotInit  000000000263faa0:000000013f4239a8 0000000004d49820:NotInit  <<
000007fef5380dd8  40013ab      380 ...emRemovedCallback  0   shared           static s_sentinelRemovedCallback
                                 >> Domain:Value  00000000025e7950:NotInit  000000000263faa0:000000013f423a08 0000000004d49820:NotInit  <<
To dump the Large Object Heap (LOH) I use:
!dumpheap -min 85000
...
00000002353fa698 00000000011b5cc0 19578352 Free
00000002366a6488 000007fef8ee7ca0   114496     
00000002366c23c8 00000000011b5cc0 134666512 Free
000000023e72fcd8 000007fef8eefce0   524312     
000000023e7afcf0 00000000011b5cc0  2618528 Free
000000023ea2f190 000007fef8eef7c0   242496     
total 118 objects
------------------------------
total 450 objects
Statistics:
              MT    Count    TotalSize Class Name
000007ff01101028        1        98304 System.Data.Linq.IdentityManager+StandardIdentityManager+IdentityCache`2+Slot[[TTC.TT.Nelson.DataAccess.ZPermanentRedirect, TTC.TT.Nelson],[System.Int32, mscorlib]][]
000007fef8ee9598        1       131096 System.Char[]
000007fef8ed5a90        2       262208 System.Object[]
000007ff01ea0858        1       524280 System.Data.Linq.IdentityManager+StandardIdentityManager+IdentityCache`2+Slot[[TTC.IV.Nelson.DataAccess.ZSitecoreUrl, TTC.IV.Nelson],[System.Guid, mscorlib]][]
000007ff010d6450        1       524280 System.Data.Linq.IdentityManager+StandardIdentityManager+IdentityCache`2+Slot[[TTC.TT.Nelson.DataAccess.ZSitecoreUrl, TTC.TT.Nelson],[System.Guid, mscorlib]][]
000007fef8eef7c0       15      6673440 System.Collections.Hashtable+bucket[]
000007fef8eefce0       22     40672944 System.Byte[]
000007fef8ee7ca0      277     48983200 System.String
00000000011b5cc0      130   1151450896      Free
...
To see what is inside GC generation 2 (just be careful, it may crash your WinDbg session):
!dumpgen 2

Saturday, September 8, 2012

pylint and pep8

I've spent some time analyzing the Django code. Maybe it is because I am returning to Python with a lot of C# experience, but I find the Django code really messy. Classes and methods are big, there are many violations of the SOLID principles, and so on. I decided to run it through the typical code analysis tools for Python - pylint and pep8. Below are some results. By the way, there is also django-lint - unfortunately not available via pip.
pylint db\models\query.py
...

Messages by category
--------------------

+-----------+-------+---------+-----------+
|type       |number |previous |difference |
+===========+=======+=========+===========+
|convention |85     |NC       |NC         |
+-----------+-------+---------+-----------+
|refactor   |26     |NC       |NC         |
+-----------+-------+---------+-----------+
|warning    |98     |NC       |NC         |
+-----------+-------+---------+-----------+
|error      |22     |NC       |NC         |
+-----------+-------+---------+-----------+

...
Global evaluation
-----------------
Your code has been rated at 6.52/10
pep8 db\models\query.py
db\models\query.py:12:5: E128 continuation line under-indented for visual indent
db\models\query.py:28:1: E302 expected 2 blank lines, found 1
db\models\query.py:53:14: E231 missing whitespace after ','
db\models\query.py:54:29: E231 missing whitespace after ','
db\models\query.py:174:17: E127 continuation line over-indented for visual indent
db\models\query.py:269:80: E501 line too long (83 > 79 characters)
db\models\query.py:290:80: E501 line too long (81 > 79 characters)
db\models\query.py:315:50: E225 missing whitespace around operator
db\models\query.py:328:80: E501 line too long (88 > 79 characters)
db\models\query.py:336:17: E128 continuation line under-indented for visual indent
db\models\query.py:366:21: E128 continuation line under-indented for visual indent
db\models\query.py:367:80: E501 line too long (127 > 79 characters)
db\models\query.py:368:17: E128 continuation line under-indented for visual indent
db\models\query.py:408:80: E501 line too long (90 > 79 characters)
db\models\query.py:409:17: E125 continuation line does not distinguish itself from next logical line
db\models\query.py:410:80: E501 line too long (84 > 79 characters)
db\models\query.py:412:80: E501 line too long (87 > 79 characters)
db\models\query.py:414:80: E501 line too long (96 > 79 characters)
db\models\query.py:416:80: E501 line too long (144 > 79 characters)
db\models\query.py:434:17: E127 continuation line over-indented for visual indent
db\models\query.py:445:80: E501 line too long (83 > 79 characters)
db\models\query.py:467:80: E501 line too long (113 > 79 characters)
db\models\query.py:469:17: E127 continuation line over-indented for visual indent
db\models\query.py:482:17: E127 continuation line over-indented for visual indent
db\models\query.py:495:17: E127 continuation line over-indented for visual indent
db\models\query.py:523:17: E127 continuation line over-indented for visual indent
db\models\query.py:553:17: E127 continuation line over-indented for visual indent
db\models\query.py:567:80: E501 line too long (84 > 79 characters)
db\models\query.py:581:21: E128 continuation line under-indented for visual indent
db\models\query.py:583:80: E501 line too long (103 > 79 characters)
db\models\query.py:585:17: E128 continuation line under-indented for visual indent
db\models\query.py:593:17: E127 continuation line over-indented for visual indent
db\models\query.py:595:17: E127 continuation line over-indented for visual indent
db\models\query.py:597:17: E128 continuation line under-indented for visual indent
db\models\query.py:633:21: E127 continuation line over-indented for visual indent
db\models\query.py:680:80: E501 line too long (80 > 79 characters)
db\models\query.py:681:21: E128 continuation line under-indented for visual indent
db\models\query.py:685:80: E501 line too long (90 > 79 characters)
db\models\query.py:699:80: E501 line too long (80 > 79 characters)
db\models\query.py:724:80: E501 line too long (80 > 79 characters)
db\models\query.py:734:80: E501 line too long (81 > 79 characters)
db\models\query.py:735:21: E128 continuation line under-indented for visual indent
db\models\query.py:744:17: E128 continuation line under-indented for visual indent
db\models\query.py:753:17: E127 continuation line over-indented for visual indent
db\models\query.py:764:17: E127 continuation line over-indented for visual indent
db\models\query.py:775:17: E127 continuation line over-indented for visual indent
db\models\query.py:777:80: E501 line too long (85 > 79 characters)
db\models\query.py:903:80: E501 line too long (85 > 79 characters)
db\models\query.py:920:80: E501 line too long (80 > 79 characters)
db\models\query.py:1013:80: E501 line too long (93 > 79 characters)
db\models\query.py:1014:21: E128 continuation line under-indented for visual indent
db\models\query.py:1018:80: E501 line too long (85 > 79 characters)
db\models\query.py:1039:21: E128 continuation line under-indented for visual indent
db\models\query.py:1043:80: E501 line too long (80 > 79 characters)
db\models\query.py:1054:21: E128 continuation line under-indented for visual indent
db\models\query.py:1079:80: E501 line too long (102 > 79 characters)
db\models\query.py:1201:17: E127 continuation line over-indented for visual indent
db\models\query.py:1240:1: E302 expected 2 blank lines, found 1
db\models\query.py:1245:80: E501 line too long (80 > 79 characters)
db\models\query.py:1303:80: E501 line too long (83 > 79 characters)
db\models\query.py:1304:80: E501 line too long (89 > 79 characters)
db\models\query.py:1305:80: E501 line too long (82 > 79 characters)
db\models\query.py:1307:80: E501 line too long (83 > 79 characters)
db\models\query.py:1323:80: E501 line too long (93 > 79 characters)
db\models\query.py:1323:91: E225 missing whitespace around operator
db\models\query.py:1330:80: E501 line too long (103 > 79 characters)
db\models\query.py:1332:80: E501 line too long (96 > 79 characters)
db\models\query.py:1333:80: E501 line too long (97 > 79 characters)
db\models\query.py:1332:94: E225 missing whitespace around operator
db\models\query.py:1336:80: E501 line too long (82 > 79 characters)
db\models\query.py:1359:80: E501 line too long (88 > 79 characters)
db\models\query.py:1361:29: E203 whitespace before ':'
db\models\query.py:1418:80: E501 line too long (82 > 79 characters)
db\models\query.py:1420:80: E501 line too long (92 > 79 characters)
db\models\query.py:1425:80: E501 line too long (85 > 79 characters)
db\models\query.py:1426:80: E501 line too long (88 > 79 characters)
db\models\query.py:1439:9: E128 continuation line under-indented for visual indent
db\models\query.py:1439:9: E125 continuation line does not distinguish itself from next logical line
db\models\query.py:1443:80: E501 line too long (87 > 79 characters)
db\models\query.py:1449:80: E501 line too long (80 > 79 characters)
db\models\query.py:1485:80: E501 line too long (81 > 79 characters)
db\models\query.py:1489:80: E501 line too long (82 > 79 characters)
db\models\query.py:1527:80: E501 line too long (84 > 79 characters)
db\models\query.py:1530:17: E128 continuation line under-indented for visual indent
db\models\query.py:1531:17: E128 continuation line under-indented for visual indent
db\models\query.py:1532:17: E128 continuation line under-indented for visual indent
db\models\query.py:1589:15: E261 at least two spaces before inline comment
db\models\query.py:1596:25: E261 at least two spaces before inline comment
db\models\query.py:1599:22: E261 at least two spaces before inline comment
db\models\query.py:1600:33: E261 at least two spaces before inline comment
db\models\query.py:1639:80: E501 line too long (83 > 79 characters)
db\models\query.py:1640:80: E501 line too long (83 > 79 characters)
db\models\query.py:1642:80: E501 line too long (92 > 79 characters)
db\models\query.py:1645:80: E501 line too long (89 > 79 characters)
db\models\query.py:1647:80: E501 line too long (82 > 79 characters)
db\models\query.py:1653:80: E501 line too long (81 > 79 characters)
db\models\query.py:1654:80: E501 line too long (81 > 79 characters)
db\models\query.py:1659:63: E225 missing whitespace around operator
db\models\query.py:1663:80: E501 line too long (93 > 79 characters)
db\models\query.py:1722:80: E501 line too long (80 > 79 characters)
db\models\query.py:1745:80: E501 line too long (83 > 79 characters)
db\models\query.py:1757:80: E501 line too long (81 > 79 characters)
db\models\query.py:1788:80: E501 line too long (82 > 79 characters)
Not so bad, after all we are investigating a file that has almost 2000 lines (many of them comments). Analysis of other classes gives more or less the same results on average. The bigger the class, the worse the results.

Tuesday, July 31, 2012

Python and a GIL

I recently ran into a very nice article that describes in an excellent way what the GIL is and what it is not. So if you, like me, have run into many GIL battles in your life, it is a good read. One thing to add: traditionally, threads were introduced to do parallel IO, not parallel computation - keep that in mind.
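That IO point is easy to demonstrate: pure-Python CPU work serializes under the GIL, but blocking IO (simulated here with sleep) releases it, so IO-bound threads genuinely overlap. A minimal sketch:

```python
import threading
import time

def io_task(seconds, results, idx):
    # time.sleep releases the GIL, so these calls overlap across threads.
    time.sleep(seconds)
    results[idx] = idx

results = [None] * 4
start = time.monotonic()
threads = [threading.Thread(target=io_task, args=(0.1, results, i))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start
# Four 0.1 s "IO waits" finish in roughly 0.1 s of wall time, not 0.4 s.
# A CPU-bound loop in each thread would show no such speedup, because the
# GIL lets only one thread execute Python bytecode at a time.
```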

Friday, July 27, 2012

UTF8 support under Windows in SublimeText2

SublimeText2 has a real problem with guessing encodings under Windows. By default it tries to use CP1252 instead of CP1250 (for my locale :)). I strongly suggest installing a package manager for SublimeText2, and then installing an encoding helper plugin. I also recommend (if you are in a Central European country) changing the fallback_encoding property in Preferences -> Settings - Default to Central European (Windows 1250).
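The change itself, assuming the standard Sublime Text JSON settings format, is a single entry in the user settings file:

```json
{
    // Used when no encoding can be detected for a file.
    "fallback_encoding": "Central European (Windows 1250)"
}
```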
A list of possible values:
st_encodings_list = [
   "UTF-8",
   "UTF-8 with BOM",
   "UTF-16 LE",
   "UTF-16 LE with BOM",
   "UTF-16 BE",
   "UTF-16 BE with BOM",
   "Western (Windows 1252)",
   "Western (ISO 8859-1)",
   "Western (ISO 8859-3)",
   "Western (ISO 8859-15)",
   "Western (Mac Roman)",
   "DOS (CP 437)",
   "Arabic (Windows 1256)",
   "Arabic (ISO 8859-6)",
   "Baltic (Windows 1257)",
   "Baltic (ISO 8859-4)",
   "Celtic (ISO 8859-14)",
   "Central European (Windows 1250)",
   "Central European (ISO 8859-2)",
   "Cyrillic (Windows 1251)",
   "Cyrillic (Windows 866)",
   "Cyrillic (ISO 8859-5)",
   "Cyrillic (KOI8-R)",
   "Cyrillic (KOI8-U)",
   "Estonian (ISO 8859-13)",
   "Greek (Windows 1253)",
   "Greek (ISO 8859-7)",
   "Hebrew (Windows 1255)",
   "Hebrew (ISO 8859-8)",
   "Nordic (ISO 8859-10)",
   "Romanian (ISO 8859-16)",
   "Turkish (Windows 1254)",
   "Turkish (ISO 8859-9)",
   "Vietnamese (Windows 1258)",
   "Hexadecimal"
]

Wednesday, July 25, 2012

A good approximation of pi

Ah, yes, a really nice article about a Pi approximation.

Monday, July 16, 2012

PowerShell sending requests

From time to time I have to send a request to some web service for each element in a database. From the db I tend to generate PowerShell commands that send the requests:
$page = (New-Object System.Net.WebClient).DownloadString("http://localhost/")
And if you need to see what was returned:
Write-Host "$page"

Friday, July 13, 2012

Using PowerShell

I still keep using Unix tools underneath my GNU Emacs on Windows - tools like grep, sort, and find. But they need configuration, and on Windows they face specific problems :) On SO there is a nice list of PoSh cmdlets that are an excellent replacement for my typical tasks.

Tuesday, July 10, 2012

The next JavaScript issue

This site is becoming my diary of the JavaScript problems that I run into or am aware of. The newest one is infinity. In other words, what is the result of:
parseInt(1 / 0, 19)
And why is it 18?
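The trick: 1 / 0 is Infinity, parseInt coerces its argument to the string "Infinity", and in base 19 the valid digits are 0-9 and a-i, so only the leading "I" parses - as the digit value 18. The digit arithmetic can be checked from Python:

```python
# JavaScript: parseInt(1 / 0, 19)
#   1 / 0            -> Infinity
#   String(Infinity) -> "Infinity"
#   base-19 digits are 0-9 and a-i, so parsing stops after the leading "I"
assert int('i', 19) == 18   # 'i' is the highest base-19 digit
assert int('I', 19) == 18   # digit parsing is case-insensitive
try:
    int('in', 19)           # 'n' is not a base-19 digit: Python rejects it,
except ValueError:
    pass                    # while parseInt just stops at 'I' and returns 18
```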

Monday, July 2, 2012

What tool to use to write T4 templates

Each time I work for a different company I use a different tool; so far I believe the best is tangible T4, without the modeling tools.

Wednesday, May 30, 2012

WinDbg under .net 4+

I had a problem loading mscorwks.dll under WinDbg. It turns out that for framework 4+ I should use the command:
.loadby sos clr

Monday, April 16, 2012

Instead of GNU Emacs in Windows

Recently I've been using Sublime, mainly because of its text encoding support - better, I believe, than GNU Emacs in a Windows environment, and for sure much better than Notepad++. Key points are: TextMate format support - finally - and a Python plug-in system, so configuration and plug-in creation are much better than in VisualStudio, though compared to GNU Emacs it's still not as good.

MVC PartialViews, Templates and JavaScript issue

In other words, a common problem is making just one call to a library or a method. Here is a nice solution.

Monday, April 2, 2012

Vacuous truth

Recently my dev team had a problem with LINQ operators on empty collections. As I found out, programmers are generally not aware of vacuous truth. Basic math, but still, many coders were never exposed to this kind of problem, ech...
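LINQ's All() on an empty collection is exactly this case: there is no element that violates the predicate, so the result is true. Python's built-ins behave the same way, which makes for a quick demonstration:

```python
# "Every element of the empty set satisfies P" is vacuously true.
assert all(x > 100 for x in []) is True    # LINQ: empty.All(x => x > 100)
assert any(x > 100 for x in []) is False   # LINQ: empty.Any(x => x > 100)

# The common surprise: "all even" and "all odd" both hold at once.
empty = []
assert all(x % 2 == 0 for x in empty)
assert all(x % 2 == 1 for x in empty)
```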

Thursday, March 1, 2012

Wavelet trees an interesting data structure

Recently I have been refreshing my knowledge of data structures, trying to find some nice ideas that I am not aware of. Here is a really cool data structure that in theory should behave faster than an array.
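The core idea is easy to sketch: each node splits the alphabet in half and stores one bit per symbol, and a rank query walks down the tree. A toy Python version (plain lists instead of succinct bitvectors, so none of the real speed, just the structure; it assumes the queried symbol occurs in the sequence's alphabet):

```python
class WaveletTree:
    """Toy wavelet tree supporting rank(c, i): count of c in seq[:i]."""

    def __init__(self, seq, alphabet=None):
        if alphabet is None:
            alphabet = sorted(set(seq))
        self.alphabet = alphabet
        if len(alphabet) <= 1:
            self.bits = None          # leaf: every symbol here is the same
            return
        mid = len(alphabet) // 2
        left_set = set(alphabet[:mid])
        # One bit per symbol: 0 = routed to left child, 1 = to right child.
        self.bits = [0 if c in left_set else 1 for c in seq]
        self.left = WaveletTree([c for c in seq if c in left_set],
                                alphabet[:mid])
        self.right = WaveletTree([c for c in seq if c not in left_set],
                                 alphabet[mid:])

    def rank(self, c, i):
        if self.bits is None:
            return i                  # leaf: all i preceding symbols are c
        mid = len(self.alphabet) // 2
        if c in self.alphabet[:mid]:
            # How many of the first i symbols went left = new prefix length.
            return self.left.rank(c, self.bits[:i].count(0))
        return self.right.rank(c, self.bits[:i].count(1))

wt = WaveletTree("abracadabra")
# wt.rank('a', 11) == 5 and wt.rank('b', 11) == 2
```

With real O(1)-rank bitvectors each query costs O(log sigma) for alphabet size sigma, which is where the "faster than an array" claims come from.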

Tuesday, February 28, 2012

JavaScript problems

I am watching videos from the last CodeMash (at this point in time they are hard to find for the CodeMash 2012 event). But check this out: an excellent presentation of some interesting behavior of the JavaScript language.

Thursday, February 2, 2012

Major unsolved problems in CS

There is a nice discussion going on on Stack Exchange about major unsolved CS problems, so if you are a math fan (like me), it is something to work on in your free time.

Tuesday, January 24, 2012

HTML5 support in VS2010

I wonder why it was not supported from the beginning.

Thursday, January 5, 2012

What to do when package information is too big in NuGet

Use Out-GridView:
get-package -listavailable -filter fluent | Out-GridView