Sunday, July 31, 2011
Sunday, July 10, 2011
2011 AVG Anti-Virus Free
Security suites all chant the same sales mantra: "scans faster, easier to use, better performance," and AVG's newly released version is said to deliver all three. The scans are indeed faster, installation is quicker, and some user-interface changes make it easier to use. However, whether the changes to the engine have improved its power to detect and remove threats is a harder call; we'll have to wait for independent laboratories to publish efficacy results later this year.
AVG Free gains some new protective features this year. The software offers what it calls "Smart Scan," which uses AVG's behavioral-detection network to scan a known-safe file only once, rescanning it only if a change is detected. Like its competitors' services, AVG's network consists of anonymized data its user base contributes to the cloud. You can opt in or out of contributing your data during installation or from the settings menu, and AVG says opting out has no negative impact on safety.
Monday, October 18, 2010
More is too more
In 1946 only one programmable electronic computer existed in the entire world. A few years later dozens existed; by the 1960s, hundreds. These computers were still so fearsomely expensive that developers worked hard to minimize the resources their programs consumed.
Though the microprocessor caused the price of compute cycles to plummet, individual processors still cost many dollars. By the 1990s, companies such as Microchip and Zilog were already selling complete microcontrollers for sub-dollar prices. For the first time most embedded applications could cost-effectively exploit partitioning into multiple CPUs. However, few developers actually do this; the non-linear schedule/LoC curve is nearly unknown in embedded circles.
Today's cheap transistor-rich ASICs coupled with tiny 32-bit processors provided as "soft" IP means designers can partition their programs over multiple—indeed many—processors to dramatically accelerate development schedules. Faster product delivery means lower engineering costs. When NRE is properly amortized over production quantities, lower engineering costs translate into lower cost of goods. The customer gets that cool new electronic goody faster and cheaper.
Or, you can continue to build huge monolithic programs running on a single processor, turning a win-win situation into one where everyone loses.
Some other benefits
Smaller systems contain fewer bugs, of course, but they also tend to have a much lower defect rate. A study by Chou, Yang, Chelf, and Hallem evaluated Linux version 2.4.9. In this released and presumably debugged code, an automatic static code checker identified many hundreds of mistakes. Error rates for big functions were two to six times higher than for smaller routines. Partition to accelerate the schedule, and ship a higher-quality product.
NIST (the National Institute of Standards and Technology) found that poor testing accounts for some $22 billion in software failures each year [10]. Testing is hard; as programs grow, the number of execution paths explodes. Robert Glass estimates that for each 25% increase in program size, the program's complexity (represented by paths created by function calls, decision statements, and the like) doubles [11].
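Compounded, Glass's rule means complexity grows far faster than size. A small sketch of that arithmetic; the closed form below is my own restatement of the 25%-doubles rule, not Glass's own formula:

```python
import math

# Robert Glass's rule: each 25% increase in program size doubles
# complexity. In closed form that is complexity ~ size**k with
# k = log(2)/log(1.25) ~= 3.1, i.e. complexity grows roughly with
# the cube of program size.
def complexity_ratio(size_ratio):
    """Relative path complexity of a program size_ratio times larger."""
    k = math.log(2) / math.log(1.25)
    return size_ratio ** k

print(complexity_ratio(2))   # a 2x-larger program: ~8.6x the complexity
print(complexity_ratio(4))   # a 4x-larger program: ~74x the complexity
```

So quadrupling a program's size multiplies the paths to be tested by nearly two orders of magnitude, which is why the testing burden explodes.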
Standard software-testing techniques simply don't work. Most studies find that conventional debugging and QA evaluations find only half the program bugs.
Partitioning the system into smaller chunks, distributed over multiple CPUs, with each having a single highly cohesive function, reduces the number of control paths and makes the system inherently more testable.
In no other industry can a company ship a poorly-tested product, often with known defects, without being sued. Eventually the lawyers will pick up the scent of fresh meat in the firmware world.
Partition to accelerate the schedule, ship a higher quality product and one that's been properly tested. Reduce the risk of litigation.
Adding processors increases system performance and, not surprisingly, simultaneously reduces development time. A rule of thumb states that a system loaded to 90% of processor capacity doubles development time (versus one loaded at 70% or less) [12]. At 95% processor loading, expect the project schedule to triple. When there's only a handful of bytes left over, adding even the most trivial new feature can take weeks as the developers rewrite massive sections of code to free up a bit of ROM. When CPU cycles are in short supply, an analogous situation occurs.
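The rule of thumb above can be captured as a simple lookup over its three breakpoints; the straight-line interpolation between them is an illustrative assumption of mine, not part of the rule:

```python
def schedule_multiplier(loading):
    """Schedule-stretch factor for a given fraction of CPU capacity used.

    Breakpoints follow the rule of thumb (70% -> 1x, 90% -> 2x,
    95% -> 3x); linear interpolation between them is an assumption.
    """
    points = [(0.70, 1.0), (0.90, 2.0), (0.95, 3.0)]
    if loading <= points[0][0]:
        return 1.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if loading <= x1:
            # interpolate between the two nearest breakpoints
            return y0 + (y1 - y0) * (loading - x0) / (x1 - x0)
    return points[-1][1]  # beyond 95%: at least 3x
```

For example, a system at 80% loading would land around a 1.5x schedule stretch under this interpolation.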
The superprogrammer effect
Developers come in all sorts of flavors, from somewhat competent plodders to miracle workers who effortlessly create beautiful code in minutes. Management's challenge is to recognize the superprogrammers and use them efficiently. Few bosses do; the best programmers get lumped with relative dullards to the detriment of the entire team.
Big projects wear down the superstars. Their glazed eyes reflect the meeting and paperwork burden; their creativity is thwarted by endless discussions and memos.
The moral is clear and critically important: wise managers put their very best people on the small sections partitioned off of the huge project. Divide your system over many CPUs and let the superprogrammers attack the smallest chunks.
Though most developers view themselves as superprogrammers, competency follows a bell curve. Ignore self-assessments. In "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments," Justin Kruger and David Dunning showed that though the top half of performers were pretty accurate in evaluating their own performance, the bottom half were wildly optimistic when rating themselves.
Reduce NRE, save big bucks
Hardware designers will shriek when you propose adding processors just to accelerate the software schedule. Though they know transistors have little or no cost, the EE zeitgeist is to always minimize the bill of materials. Yet since the dawn of the microprocessor age, it has been routine to add parts just to simplify the code. No one today would consider building a software UART, though it's quite easy to do and wasn't terribly uncommon decades ago. Implement asynchronous serial I/O in code and the structure of the entire program revolves around the software UART's peculiar timing requirements. Consequently, the code becomes a nightmare. So today we add a hardware UART without complaint. The same can be said about timers, pulse-width modulators, and more. The hardware and software interact as a synergistic whole orchestrated by smart designers who optimize both product and engineering costs.
Sum the hardware component prices, add labor and overhead, and you still haven't properly accounted for the product's cost. Non-recurring engineering (NRE) is just as important as the price of the printed circuit board. Detroit understands this. It can cost more than $2 billion to design and build the tooling for a new car. Sell one million units and the consumer must pay $2,000 above the component costs to amortize those NRE bills.
Similarly, when we in the embedded world save NRE dollars by delivering products faster, we reduce the system's recurring cost. Everyone wins.
Sure, there's a cost to adding processors, especially when doing so means populating more chips on the printed circuit board. But transistors are particularly cheap inside of an ASIC. A full 32-bit CPU can consume as little as 20,000 to 30,000 gates. Interestingly, users of configurable processor cores average about six 32-bit processors per ASIC, with at least one customer's chip using more than 180 processors! So if time to market is really important to your company (and when isn't it?), if the code naturally partitions well, and if CPUs are truly cheap, what happens when you break all the rules and add lots of processors? Using the COCOMO numbers, a one-million-LoC program divided over 100 CPUs can be completed five times faster than using the traditional monolithic approach, at about one-fifth the cost.
Extreme partitioning is the silver bullet to solving the software productivity crisis.
Intel's pursuit of massive performance gains has given the industry a notion that processors are big, ungainly, transistor-consuming, heat-dissipating chips. That's simply not true; the idea only holds water in the strange byways of the desktop world. An embedded 32-bitter is only about an order of magnitude larger than the 8080, the first really useful processor introduced in 1974.
Obviously, not all projects partition as cleanly as those described here. Some do better, when handling lots of fast data in a uniform manner. The same code might live on many CPUs. Others aren't so orthogonal. But only a very few systems fail to benefit from clever partitioning.
Cheating the schedule's exponential growth
The upper curve in Figure 3 is the COCOMO schedule; the lower one assumes we're building systems at the 20-KLoC/program productivity level. The schedule, and hence costs, grow linearly with increasing program size.
But how can we operate at these constant levels of productivity when programs exceed 20 KLoC? The answer: by partitioning! And by understanding the implications of Brooks's Law and DeMarco and Lister's study. The data is stark: if we don't compartmentalize the project, dividing it into small chunks that can be created by tiny teams working more or less in isolation, schedules will balloon exponentially.
Professionals look for ways to maximize their effectiveness. As professional software engineers we have a responsibility to find new partitioning schemes. Currently 70 to 80% of a product's development cost is consumed by the creation of code and that ratio is only likely to increase because firmware size doubles about every 10 months. Software will eventually strangle innovation without clever partitioning.
Academics drone endlessly about top-down decomposition (TDD), a tool long used for partitioning programs. Split your huge program into many independent modules; divide these modules into functions, and only then start cranking code.
TDD is the biggest scam ever perpetrated on the software community.
Though TDD is an important strategy that allows wise developers to divide code into manageable pieces, it's not the only tool we have available for partitioning programs. TDD is merely one arrow in the quiver, never used to the exclusion of all else. We in the embedded systems industry have some tricks unavailable to IT programmers.
One way to keep productivity levels high—at, say, the 20-KLoC/program number—is to break the program up into a number of discrete entities, each no more than 20KLoC long. Encapsulate each piece of the program into its own processor.
That's right: add CPUs merely for the sake of accelerating the schedule and reducing engineering costs.
Sadly, most 21st-century embedded systems look an awful lot like mainframe computers of yore. A single CPU manages a disparate array of sensors, switches, communications links, PWMs, and more. Dozens of tasks handle many sorts of mostly unrelated activities. A hundred thousand lines of code all linked into a single executable enslaves dozens of programmers all making changes throughout a Byzantine structure no one completely comprehends. Of course development slows to a crawl.
Transistors are cheap. Developers are expensive.
Break the system into small parts, allocate one partition per CPU, and then use a small team to build each subsystem. Minimize interactions and communications between components and between the engineers.
Suppose the monolithic, single-CPU version of the product requires 100K lines of code. The COCOMO calculation gives a 1,403 man-month development schedule.
Segment the same project into four processors, assuming one has 50 KLoC and the others 20 KLoC each. Communication overhead requires a bit more code, so we've added 10% to the 100-KLoC base figure. The schedule collapses to 909 man-months, or 65% of that required by the monolithic version.
Maybe the problem is quite orthogonal and divides neatly into many small chunks, none being particularly large. Five processors running 22 KLoC each will take 1,030 man-months, or 73% of the original, not-so-clever design.
Transistors are cheap—so why not get crazy and toss in lots of processors? One processor runs 20 KLoC and the other nine each run 10-KLoC programs. The resulting 724 man-month schedule is just half of the single-CPU approach. The product reaches consumers' hands twice as fast and development costs tumble. You're promoted and get one of those hot foreign company cars plus a slew of appreciating stock options. Being an engineer was never so good.
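The savings in all three scenarios follow from the superlinear size/effort curve: when effort grows as KLoC raised to a power greater than one, several small programs cost less than one big one even after adding code for interprocessor communication. A minimal sketch, using basic COCOMO's embedded-mode constants as an assumption (the article's exact calibration, which yields 1,403 man-months for 100 KLoC, isn't stated, so these ratios approximate rather than reproduce its figures):

```python
# Basic COCOMO: effort (man-months) = C * KLoC**B.
# C and B below are basic COCOMO's "embedded mode" constants,
# an assumption standing in for the article's own calibration.
C, B = 3.6, 1.20

def effort(kloc):
    return C * kloc ** B

monolithic = effort(100)                 # one 100-KLoC program
four_way = effort(50) + 3 * effort(20)   # 110 KLoC total: 10% comms overhead
ten_way = effort(20) + 9 * effort(10)    # 110 KLoC total, finer split

print(round(four_way / monolithic, 2))   # fraction of monolithic effort
print(round(ten_way / monolithic, 2))    # finer partitioning saves more
```

With any exponent above 1.0 the partitioned totals come in below the monolithic figure, and the finer the split, the larger the saving, which is the whole argument in one inequality.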


11:55 AM
Afraz Mehmood