Friday, October 18, 2019

HelioSeal Technology

Introduction - Helium Hard Disk Drives

Data is the ‘life-blood’ of an organization and the key asset that it wants to keep around for years. It must be easily accessible, secured, and reliable within an enterprise so it can be processed for real-time use, or analyzed in the future to extract further value or intelligence. It is for these reasons, and others (such as legal, regulatory, and due-diligence requirements), that organizations invest in their data. The information extracted can lead to better business decisions, improved organizational processes, advanced technologies, and maximized returns.



As Western Digital develops the head assemblies for its leading helium HDD portfolio, the use of the Damascene process, coupled with energy-assisted magnetic recording technology, will enable the company to deliver even higher HDD capacities in the future, setting standards for others to follow.

Western Digital not only defines the future of hard drives, but has created innovative solutions by making prudent technology choices, investing in multiple parallel technologies, and delivering products to market with exceptional quality and reliability. This disciplined approach to solving industry problems, such as capacity expansion, will evolve the hard drive market for the next decade, and beyond.

Helium-Sealed Technology

The data storage industry is undergoing a major shift in which new market segments and storage tiers are emerging, creating high demand for scalable, mass storage at cost-effective prices. The Capacity Enterprise Hard Drive segment is currently experiencing a 40% compound annual growth rate (CAGR) in petabytes stored.

HelioSeal-based HDDs
Helium-sealed drives are part of this segment and represent one of the most significant storage technology advancements in decades. Drawing on its heritage in HDD development, Western Digital pioneered helium-sealed drives with the introduction of HelioSeal® technology in 2013 (under the HGST brand).

HelioSeal is the foundational building block for Western Digital’s helium-sealed drive portfolio, with four successful generations of drives delivered to date, representing a solid, stable, field-proven design. Over 20 million HelioSeal-based HDDs have been shipped worldwide to date.





A quick review of the HDD core competencies driven by Western Digital: the company pioneered helium-sealed HDD technology, was the first to deliver multi-stage micro-actuation for better head positioning in a data center hard drive, and produced several hard drive innovations using the Damascene head-making process to obtain finer control of head shape and dimensions when writing to small, narrow tracks on the disk.
The decision to improve current core HDD technologies, coupled with the successful development and execution of these technologies, positions Western Digital as the leader in hard drives today and as the company that will deliver increased hard drive capacities in the future.



Highlights


  • Combines HelioSeal and host-managed SMR to deliver more cost-effective capacity than conventional magnetic recording (CMR) drives.
  • Purpose-built for “sequential write” applications and workloads (see the sketch after this list).
  • Consistent, predictable performance with uncompromising enterprise-class quality and reliability.
  • 2.5M hour MTBF rating.
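
A minimal sketch of what host-managed SMR implies for software, assuming a simplified zone model (the HostManagedSmrModel class and the zone sizes below are illustrative only, not a real device interface): the host may only append at each zone's write pointer, which is why these drives favor sequential-write workloads.

```typescript
// Illustrative model: host-managed SMR exposes the disk as zones that must be
// written sequentially. The host tracks a write pointer per zone and may only
// append at that pointer; out-of-order writes inside a zone are rejected.

interface Zone {
  start: number;        // first LBA of the zone
  length: number;       // zone size in blocks
  writePointer: number; // next LBA that may be written
}

class HostManagedSmrModel {
  constructor(private zones: Zone[]) {}

  // Append-only write: succeeds only at the current write pointer.
  write(zoneIndex: number, lba: number, blocks: number): boolean {
    const z = this.zones[zoneIndex];
    const inZone = lba >= z.start && lba + blocks <= z.start + z.length;
    if (!inZone || lba !== z.writePointer) {
      return false; // random (out-of-order) writes are not allowed
    }
    z.writePointer += blocks;
    return true;
  }

  // To overwrite data, the host must reset the zone and rewrite it sequentially.
  resetZone(zoneIndex: number): void {
    const z = this.zones[zoneIndex];
    z.writePointer = z.start;
  }
}

// Example: one zone of 65536 blocks.
const model = new HostManagedSmrModel([{ start: 0, length: 65536, writePointer: 0 }]);
console.log(model.write(0, 0, 128));    // true  (sequential append)
console.log(model.write(0, 4096, 128)); // false (skips ahead of the write pointer)
```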

Applications and Workload


  • Big Data or Bulk Storage
  • Cloud Storage
  • Social Media
  • Content Libraries, Streaming Media and Digital Media Assets
  • Online Back-up and Replication
  • Compliance, Audits, and Regulatory Records



References (for more detail):

https://documents.westerndigital.com/content/dam/doc-library/en_us/assets/public/western-digital/collateral/brochure/brochure-helioseal-technology.pdf

https://www.westerndigital.com/products/data-center-drives/ultrastar-dc-hc600-series-hdd

https://documents.westerndigital.com/content/dam/doc-library/en_us/assets/public/western-digital/collateral/tech-brief/tech-brief-hdd-capacity.pdf?_ga=2.186543401.146656319.1571065102-634084427.1571065102

https://www.extremetech.com/computing/179972-seagate-unveils-worlds-fastest-6tb-hard-drive-and-it-isnt-filled-with-helium








Sunday, March 17, 2019

Image Guided Therapy (IGT)

Image Guided Therapy (IGT)


Abstract on Image-Guided Therapy (IGT)

System development for image-guided therapy (IGT), or image-guided interventions (IGI), continues to be an area of active interest across academic and industry groups. This is an emerging field that is growing rapidly: major academic institutions and medical device manufacturers have produced IGT technologies that are in routine clinical use, dozens of high-impact publications appear in well-regarded journals each year, and several small companies have successfully commercialized sophisticated IGT systems. In meetings between IGT investigators over the last two years, a consensus has emerged that several key areas must be addressed collaboratively by the community to reach the next level of impact and efficiency in IGT research and development and to improve patient care.

The History

Brigham and Women’s Hospital (BWH) began its IGT program in 1991. Since then, it has become an internationally recognized pioneer in real-time intraoperative MRI-guided therapy. Using the well-known “double-doughnut” system, BWH teams performed over 3,000 surgical and interventional procedures. By 1994 the BWH IGT program had introduced non-invasive MRI-guided focused ultrasound surgery. Opening in 2011, AMIGO continues these pioneering efforts with multimodal image guidance.

Dr. Jolesz began collaborating with a team of engineers from GE Healthcare in 1988 to build the first MRI scanner for use during surgical procedures. The system had two magnets, one on each side of a patient table, giving surgeons access to the patient, who remained inside the MRI scanner.

Dr. Jolesz and the Brigham and Women’s IGT team soon followed the successful development of intraoperative MRI with another landmark achievement. In 2004, the Food and Drug Administration approved the first image-guided procedure: MRI-guided focused ultrasound (MRgFUS) treatment of uterine fibroids. Developed by BWH’s IGT team, this non-invasive interventional procedure uses MRI to monitor and control high intensity ultrasound waves that are beamed onto a fibroid and destroy it with heat. Specialists have since used the technique to treat breast and brain tumors and relieve pain from bone metastasis.

BWH’s image-guided therapy program continued to lead the advancement of the field into the 21st century, accumulating vast knowledge on best practices in designing and implementing image-guidance systems, establishing clinical programs, and designing IGT research studies. This cumulative body of work drew the attention of the National Institutes of Health, which selected Brigham and Women’s Hospital to become the National Center for Image-Guided Therapy (NCIGT) in 2005.

What Is Image-guided Therapy?

Image-guided therapy, a central concept of 21st century medicine, is the use of any form of medical imaging to plan, perform, and evaluate surgical procedures and therapeutic interventions. Image-guided therapy techniques help to make surgeries less invasive and more precise, which can lead to shorter hospital stays and fewer repeated procedures.

While the number of specific procedures that use image-guidance is growing, these procedures comprise two general categories: traditional surgeries that become more precise through the use of imaging and newer procedures that use imaging and special instruments to treat conditions of internal organs and tissues without a surgical incision.

The cross-sectional digital imaging modalities of Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) are the ones most commonly used for image-guided therapy. These procedures are also supported by ultrasound, angiography, surgical navigation equipment, tracking tools, and integration software.


Content Sources:
http://europepmc.org/abstract/med/17644360
https://ncigt.org/igthistory
https://www.brighamandwomens.org/research/amigo/image-guided-therapy-at-bwh
https://web.stanford.edu/~allisono/icra2016tutorial/ICRA2016TutorialHata.pdf

Tuesday, March 12, 2019

Search engine optimisation

Abstract:



Search Engine Optimization is widely known in the web world as SEO. It is the process of making a website’s content more search-engine friendly in order to attract traffic, by placing specific keywords in the locations that matter most to search engines. Many websites have quality content and are very attractive to visitors, yet cannot reach top positions in search engine result pages. This is often because keywords are not placed in those important locations, so search engines do not consider these websites important to visitors. Many factors influence the ranking of a website in a search engine, so to make your website appear in search engine result pages you need to learn SEO.
Today very few major search engines treat meta keywords as important; Yahoo is one of them, although even there the weight is minimal. No one can tell you the exact algorithm of any search engine; it is their trade secret. There are, however, a few areas that all search engines find important, and placing keywords in these areas can give any website a higher ranking in search engine result pages. So here are some SEO tips that can help you with optimization.

Search engine optimization (SEO) is a set of rules and techniques aimed at improving the SERP rank of the web pages of a website. SEO is a part of Internet marketing. About 60% of the searches done in search engines are unique, and nowadays the usage of search engines has increased a lot, so we optimize our web pages for the search engines.


The outline of this SEO tutorial is:

  1. Keyword Research & competition Analysis
  2. On-Page Factors
  3. Off-Page Factors

 

KEYWORD RESEARCH:


This is the first step in SEO and the most important part of it. In keyword research we choose the best keywords for our web pages: keywords that have good traffic volume and are related to our webpage.



For example, suppose our website is about online movies. Then our keywords will be something like:

online movies, free online movies, top online movies, watch online movies

So we choose keywords related to our web pages. In keyword research we decide the targeted keywords for each webpage; the complete keyword research process is explained in this SEO tutorial. A minimal sketch of this selection step appears after the tool list below.

We use different tools for keyword research; generally we prefer the best free tools. In my opinion the best free tools are:
  1. Google AdWords keyword suggestion tool
  2. Wordtracker keyword tool
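
A minimal sketch of the keyword-selection step described above; the candidate list, search volumes, and the simple relevance check are invented for illustration and would in practice come from a keyword tool such as those listed.

```typescript
// Pick target keywords: related to our topic and with good traffic volume.
interface KeywordCandidate {
  phrase: string;
  monthlySearches: number; // reported traffic volume (invented numbers here)
}

function pickTargetKeywords(
  candidates: KeywordCandidate[],
  topic: string,
  minSearches: number
): string[] {
  return candidates
    .filter((c) => c.phrase.toLowerCase().includes(topic.toLowerCase())) // related to our page
    .filter((c) => c.monthlySearches >= minSearches)                     // good traffic volume
    .sort((a, b) => b.monthlySearches - a.monthlySearches)
    .map((c) => c.phrase);
}

// Example for an online-movies site (numbers are invented):
const candidates: KeywordCandidate[] = [
  { phrase: "online movies", monthlySearches: 90000 },
  { phrase: "free online movies", monthlySearches: 60000 },
  { phrase: "watch online movies", monthlySearches: 40000 },
  { phrase: "car insurance", monthlySearches: 150000 }, // high volume but unrelated
];
console.log(pickTargetKeywords(candidates, "online movies", 10000));
// -> [ "online movies", "free online movies", "watch online movies" ]
```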



COMPETITION ANALYSIS:


After choosing the keywords we prepare a competition analysis for them. That means we search for the targeted keyword in the search engine, take the top 10 SERP listings, and prepare an analysis of each site in that top-10 list.

The competition analysis includes the number of backlinks, domain age, SERP rank, PageRank, etc.

We gather this information for every webpage listed in the top 10 SERP results, using tools such as SEOquake and SEO4Firefox.

Competition analysis tells us how much competition there is for the targeted keyword in the search engine. By studying it we can estimate, for example, how long it will take to optimize our webpage into the top 10 results, along with many other details. A minimal sketch of a record for this analysis follows.
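
As a rough illustration of the data gathered during competition analysis, here is a minimal sketch of a competitor record and a simple summary; the field names and sample values are assumptions, not the output of any particular tool.

```typescript
// One record per page in the top-10 results for the targeted keyword.
interface CompetitorPage {
  url: string;
  serpPosition: number;   // 1..10 in the results for the targeted keyword
  backlinks: number;      // number of inbound links
  domainAgeYears: number;
  pageRank: number;       // or any comparable authority score
}

// A rough difficulty signal: average backlinks and domain age of the top 10.
function summarizeCompetition(pages: CompetitorPage[]) {
  const avg = (f: (p: CompetitorPage) => number) =>
    pages.reduce((sum, p) => sum + f(p), 0) / pages.length;
  return {
    avgBacklinks: avg((p) => p.backlinks),
    avgDomainAgeYears: avg((p) => p.domainAgeYears),
  };
}

// Example with two invented entries:
console.log(
  summarizeCompetition([
    { url: "https://example.com/a", serpPosition: 1, backlinks: 1200, domainAgeYears: 10, pageRank: 6 },
    { url: "https://example.org/b", serpPosition: 2, backlinks: 400, domainAgeYears: 4, pageRank: 4 },
  ])
);
// -> { avgBacklinks: 800, avgDomainAgeYears: 7 }
```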

For more detailed information about how to prepare the competition analysis, see the rest of this SEO tutorial.


On-Page Factors:


The Google search engine now weighs these on-page factors strictly. Besides these, backlinks are needed for a better SERP rank.




After choosing the best keywords, we make sure that the targeted keywords are well optimized in the webpage.

The list of all On-Page Factors:

The keyword should:
  • be present in the title tag
  • be written in the meta keywords and meta description
  • be available in the file name or web page name
  • be available in the content of the web page
  • be written in the h1 tag at least once
  • have a density of around 2% to 7% in the page for better results (see the density sketch after this list)
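
Here is a minimal sketch of the keyword-density check; the density formula used (keyword words times occurrences, divided by total words on the page) is one common convention, and different tools may define it differently.

```typescript
// Keyword density = (occurrences of the phrase * words in the phrase) / total words, as a percentage.
function keywordDensity(pageText: string, keyword: string): number {
  const words = pageText.toLowerCase().split(/\s+/).filter((w) => w.length > 0);
  const phrase = keyword.toLowerCase().trim();
  const phraseWords = phrase.split(/\s+/).length;

  // Count non-overlapping occurrences of the phrase in the normalized text.
  const text = words.join(" ");
  let count = 0;
  let index = text.indexOf(phrase);
  while (index !== -1) {
    count++;
    index = text.indexOf(phrase, index + phrase.length);
  }
  return (count * phraseWords / words.length) * 100;
}

// Example: the 2-word keyword appears twice in this 20-word sample -> 20% density.
const sample =
  "watch online movies here the best online movies site lists new releases " +
  "classic films and family titles updated every week";
console.log(keywordDensity(sample, "online movies").toFixed(1)); // "20.0"
```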

To learn on-page optimization completely, go to the on-page part of this SEO tutorial.


Off-Page Factors:


This is the final part of the SEO tutorial. After completing all the above factors, we start doing the off-page optimization.



This is a time-consuming process. All the off-page factors are clearly covered in this SEO tutorial.

It includes:-

  1. Blog submissions
  2. Classified submissions
  3. Press releases
  4. Social bookmarking
  5. Article submissions
  6. Forum postings
  7. Link building
  8. Video submissions
  9. Directory submissions
  10. Local business center submissions

Thursday, March 7, 2019

WebAuthn

WebAuthn - Web Authentication

Abstract 


This specification defines an API enabling the creation and use of strong, attested, scoped, public key-based credentials by web applications, for the purpose of strongly authenticating users. Conceptually, one or more public key credentials, each scoped to a given WebAuthn Relying Party, are created by and bound to authenticators as requested by the web application. The user agent mediates access to authenticators and their public key credentials in order to preserve user privacy. Authenticators are responsible for ensuring that no operation is performed without user consent. Authenticators provide cryptographic proof of their properties to Relying Parties via attestation. This specification also describes the functional model for WebAuthn conformant authenticators, including their signature and attestation functionality.

About WebAuthn

For more reference: duo.com
WebAuthn is a browser-based API that allows web applications to create strong, public key-based credentials for the purpose of user authentication. It was published as an official W3C (World Wide Web Consortium) Recommendation in March 2019, and there has been tremendous movement and support from major browsers. Mozilla Firefox was first to ship WebAuthn support, with Google Chrome and Microsoft Edge following.

Immediately, WebAuthn can be used to support Universal Second Factor (U2F) security keys. However, as laptops with biometric authenticators become increasingly ubiquitous in enterprise environments, it will be used primarily for biometric authentication.

How does WebAuthn Work?


WebAuthn is an API that makes it very easy for a relying party, such as a web service, to integrate strong authentication into applications using support built in to all leading browsers and platforms. This means that web services can now easily offer their users strong authentication with a choice of authenticators such as security keys or built-in platform authenticators such as biometric readers.

How WebAuthn works:

  • User registers to a web service (a minimal registration sketch follows this list)
  • User chooses an authenticator
  • User authenticates to the web service
  • Rapid recovery from a lost or stolen device
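
As a concrete illustration of the registration step, here is a minimal browser-side TypeScript sketch using the standard navigator.credentials.create() call. The relying-party values and the locally generated challenge are illustrative only; in a real deployment the challenge comes from the server and the attestation response must be verified there.

```typescript
// Minimal WebAuthn registration sketch (browser side only).
async function registerCredential(username: string): Promise<void> {
  // 1. Normally the challenge and user ID come from the relying party's server.
  const challenge = crypto.getRandomValues(new Uint8Array(32));
  const userId = crypto.getRandomValues(new Uint8Array(16));

  const options: PublicKeyCredentialCreationOptions = {
    challenge,
    rp: { name: "Example RP", id: window.location.hostname },
    user: { id: userId, name: username, displayName: username },
    pubKeyCredParams: [
      { type: "public-key", alg: -7 },   // ES256
      { type: "public-key", alg: -257 }, // RS256
    ],
    authenticatorSelection: { userVerification: "preferred" },
    timeout: 60000,
    attestation: "none",
  };

  // 2. The browser mediates access to the authenticator (security key,
  //    fingerprint reader, etc.) and asks for user consent.
  const credential = (await navigator.credentials.create({
    publicKey: options,
  })) as PublicKeyCredential;

  // 3. The attestation response (public key, credential ID) would then be sent
  //    to the server for verification and stored for later logins, which use
  //    navigator.credentials.get() with a fresh server challenge.
  console.log("new credential id:", credential.id);
}
```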


The Security Key by Yubico does not require any additional software or drivers to use. It contains no batteries and will work in Chrome with any website or application that supports the U2F specification. The device is incredibly robust in normal use, and even some abnormal use.

Yubico is a board-level member of the FIDO Alliance and led the specification development of U2F; its devices are the reference authenticators for the U2F standard.
Yubico's FIDO U2F Security Key is a hardware authenticator with secure element supporting the Universal Second Factor (U2F) standard co-invented by Yubico and now hosted by the FIDO Alliance. It allows users to authenticate to all their U2F-enabled services and applications with one device. The Security Key by Yubico employs a secure element used to generate secrets and securely store them. The U2F protocol specifies that a new key pair is generated by the authenticator for each service, with the public key shared with that service and the private key only available to the Security Key's secure element. The authenticator provides no identifiable data to the service provider, maintaining your privacy between services.



  • Prevents unauthorized access by requiring the physical presence of the key to log in on that device
  • Plug it in and touch the gold button or edge
  • No codes to type or apps to install
  • Use it on Microsoft Windows, macOS, Linux, and Chrome OS for Chromebooks
  • Fits nicely on a keychain, in a wallet, or inside a USB port
  • Crushproof and water-resistant, with no batteries or moving parts

Two-factor authentication (2FA) is also referred to as two-step verification (2SV). It adds an extra layer of security to your account that you, and only you, can access in order to prove your identity. Millions of people use YubiKeys for 2FA because they're the easiest to use and highly secure. Plug in your YubiKey and tap it to log in to your computers, networks, and online services. Keep one on your keychain with the key to your house or car, and a second YubiKey in a safe place as a backup.

Download WebAuthn:
https://www.w3.org/2018/Talks/06-WebAuthn.pdf

Reference:
https://www.w3.org/TR/2019/REC-webauthn-1-20190304/
https://www.yubico.com/webauthn/


Tuesday, March 5, 2019

Seminar Topics on Deep Learning

Deep learning - Seminar topics

Deep learning (also known as deep structured learning or differentiable programming) is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised, or unsupervised.





Types of Data

Deep learning can be applied to any data type. The data types you work with, and the data you gather, will depend on the problem you’re trying to solve.

  • Sound (voice recognition)
  • Text (classifying reviews)
  • Images (computer vision)
  • Time series (sensor data, web activity)
  • Video (motion detection)


What is deep learning?


The field of artificial intelligence is essentially about machines doing tasks that typically require human intelligence. It encompasses machine learning, where machines learn from experience and acquire skills without human involvement. Deep learning is a subset of machine learning in which artificial neural networks, algorithms inspired by the human brain, learn from large amounts of data. Similar to how we learn from experience, a deep learning algorithm performs a task repeatedly, each time tweaking it a little to improve the outcome. We speak of ‘deep’ learning because the neural networks have many (deep) layers that enable learning. Just about any problem that requires “thought” to figure out is a problem deep learning can learn to solve. A tiny worked example of a network learning from data follows.
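
As a tiny worked example of "layers that enable learning", the sketch below trains a 2-3-1 neural network on XOR with plain gradient descent in TypeScript; the architecture, learning rate, and iteration count are toy choices for illustration, not a realistic deep-learning setup.

```typescript
// A tiny 2-3-1 neural network learning XOR: layers of weights are tweaked a
// little on each pass. (With an unlucky random initialization, re-run it.)
const sigmoid = (x: number): number => 1 / (1 + Math.exp(-x));

// Training data: inputs and targets for XOR.
const inputs = [[0, 0], [0, 1], [1, 0], [1, 1]];
const targets = [0, 1, 1, 0];

const H = 3;    // hidden units
const lr = 0.5; // learning rate
const rand = () => Math.random() - 0.5;

// Randomly initialized weights and biases for the hidden and output layers.
const w1 = Array.from({ length: H }, () => [rand(), rand()]); // hidden weights
const b1 = Array.from({ length: H }, () => 0);
const w2 = Array.from({ length: H }, () => rand());           // output weights
let b2 = 0;

for (let epoch = 0; epoch < 20000; epoch++) {
  for (let n = 0; n < inputs.length; n++) {
    const x = inputs[n];

    // Forward pass: hidden layer, then output layer.
    const h = w1.map((wj, j) => sigmoid(wj[0] * x[0] + wj[1] * x[1] + b1[j]));
    const y = sigmoid(h.reduce((s, hj, j) => s + w2[j] * hj, b2));

    // Backward pass: gradients of the squared error through the sigmoids.
    const dy = (y - targets[n]) * y * (1 - y);
    const dh = h.map((hj, j) => dy * w2[j] * hj * (1 - hj));

    // Gradient-descent updates ("tweaking it a little to improve the outcome").
    for (let j = 0; j < H; j++) {
      w2[j] -= lr * dy * h[j];
      w1[j][0] -= lr * dh[j] * x[0];
      w1[j][1] -= lr * dh[j] * x[1];
      b1[j] -= lr * dh[j];
    }
    b2 -= lr * dy;
  }
}

// After training, the outputs approximate XOR: ~0, ~1, ~1, ~0.
for (const x of inputs) {
  const h = w1.map((wj, j) => sigmoid(wj[0] * x[0] + wj[1] * x[1] + b1[j]));
  const y = sigmoid(h.reduce((s, hj, j) => s + w2[j] * hj, b2));
  console.log(`${x} -> ${y.toFixed(2)}`);
}
```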



Benefits or advantages of Deep Learning


  • Features are automatically deduced and optimally tuned for the desired outcome; they do not need to be extracted ahead of time. This avoids the time-consuming feature engineering of traditional machine learning techniques.
  • Robustness to natural variations in the data is learned automatically.
  • The same neural network based approach can be applied to many different applications and data types.
  • Massive parallel computations can be performed using GPUs and are scalable to large volumes of data. Moreover, it delivers better results when the amount of data is huge.
  • The deep learning architecture is flexible and can be adapted to new problems in the future.


Drawbacks or disadvantages of Deep Learning


  • It requires a very large amount of data in order to perform better than other techniques.
  • It is extremely expensive to train due to complex data models. Moreover, deep learning requires expensive GPUs and hundreds of machines, which increases the cost to users.
  • There is no standard theory to guide the selection of the right deep learning tools, as it requires knowledge of network topology, training methods, and other parameters. As a result it is difficult for less skilled people to adopt.
  • The learned representations are not easy to interpret from the output alone and require additional classifiers to do so; convolutional neural network based algorithms perform such tasks.


Source:

https://en.wikipedia.org/wiki/Deep_learning
https://pathmind.com/wiki/data-for-deep-learning
https://www.forbes.com/sites/bernardmarr/2018/10/01/what-is-deep-learning-ai-a-simple-guide-with-8-practical-examples/#495774dd8d4b
https://www.rfwireless-world.com/Terminology/Advantages-and-Disadvantages-of-Deep-Learning.html