  • 0 Votes
    1 Posts
    63 Views
    No one has replied
  • 0 Votes
    3 Posts
    219 Views
    Khuram Shahzad

    Thanks for sharing

  • 1 Votes
    6 Posts
    667 Views
    zareen

    @zareen said in CS403 GDB1 Solution and discussion:

    Now, would you normalize the database or keep your database in de-normalized form?

    Although a denormalized schema can greatly improve performance under extreme read loads, updates and inserts become complex because the data is duplicated and therefore has to be updated/inserted in more than one place.

    One clean way to solve this problem is through the use of triggers. For example, in our case, where the orders table also has the product_name column, an update to the value of product_name can be handled in the following way:

    Set up a trigger on the products table that fires on any update to product_name, then execute the update query on the products table as usual. The duplicated data in the orders table is updated automatically by the trigger, as sketched below.
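    A minimal sketch of such a trigger, using Python’s built-in sqlite3 module; the table layout and column names here are assumed for illustration, not taken from the course material:

    ```python
    import sqlite3

    # In-memory database with an assumed, simplified schema:
    # orders duplicates product_name (the de-normalized column).
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE products (product_id INTEGER PRIMARY KEY, product_name TEXT);
        CREATE TABLE orders   (order_id   INTEGER PRIMARY KEY,
                               product_id INTEGER,
                               product_name TEXT);

        -- Trigger: whenever a product is renamed, propagate the new name
        -- to every duplicated copy in the orders table.
        CREATE TRIGGER sync_product_name
        AFTER UPDATE OF product_name ON products
        BEGIN
            UPDATE orders
            SET product_name = NEW.product_name
            WHERE product_id = NEW.product_id;
        END;
    """)

    conn.execute("INSERT INTO products VALUES (1, 'Old Name')")
    conn.execute("INSERT INTO orders   VALUES (100, 1, 'Old Name')")

    # A single update on products is enough; the trigger keeps orders in sync.
    conn.execute("UPDATE products SET product_name = 'New Name' WHERE product_id = 1")
    print(conn.execute("SELECT product_name FROM orders WHERE order_id = 100").fetchone())
    # -> ('New Name',)
    ```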

    However, when de-normalizing the schema, do take into consideration the number of times you would be updating records compared to the number of times you would be executing SELECTs. When mixing normalization and de-normalization, focus on de-normalizing tables that are read-intensive, and keep write-intensive tables normalized.


  • 0 Votes
    3 Posts
    2k Views
    M

    The large energy cost of memory fetches limits the overall efficiency of applications no matter how efficient the accelerators on the chip are. As a result, the most important optimization must be done at the algorithm level, to reduce off-chip memory accesses and to create Dark Memory. The algorithms must first be (re)written for both locality and parallelism before you tailor the hardware to accelerate them. Using Pareto curves in the energy/op and mm²/(op/s) space allows one to quickly evaluate different accelerators, memory systems, and even algorithms to understand the trade-offs between performance, power and die area. This analysis is a powerful way to optimize chips in the Dark Silicon era.
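    As a rough sketch of that kind of evaluation, the snippet below computes a Pareto frontier over a few hypothetical (energy/op, mm²/(op/s)) design points; the names and numbers are invented for illustration, not taken from the paper.

    ```python
    # Hypothetical design points: (name, energy per op in pJ, area per throughput in mm^2/(op/s)).
    # All values are invented for illustration only.
    designs = [
        ("cpu",        900.0, 2.0e-9),
        ("gpu",        300.0, 8.0e-10),
        ("accel_a",     40.0, 5.0e-10),
        ("accel_b",     60.0, 3.0e-10),
        ("accel_slow", 120.0, 9.0e-10),
    ]

    def pareto_frontier(points):
        """Keep designs not dominated in both energy/op and mm^2/(op/s) (lower is better)."""
        frontier = []
        for name, energy, area in points:
            dominated = any(e <= energy and a <= area and (e, a) != (energy, area)
                            for _, e, a in points)
            if not dominated:
                frontier.append((name, energy, area))
        return frontier

    for name, energy, area in pareto_frontier(designs):
        print(f"{name}: {energy} pJ/op, {area:.1e} mm^2/(op/s)")
    ```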

  • 0 Votes
    5 Posts
    879 Views
    zaasmi

    Actually, it’s difficult to compare the cryptographic strengths of symmetric and asymmetric key encryption. Even though asymmetric key lengths are generally much longer (e.g. 1024 or 2048 bits) than symmetric key lengths (e.g. 128 or 256 bits), it doesn’t necessarily follow that a file encrypted with a 2048-bit RSA key (an asymmetric key) is tougher to crack than a file encrypted with a 256-bit AES key (a symmetric key).

    Instead, it would be more appropriate to compare asymmetric and symmetric encryptions on the basis of two properties:

    Their computational requirements, and

    Their ease of distribution

    Symmetric key encryption doesn’t require as many CPU cycles as asymmetric key encryption, so you can say it’s generally faster. Thus, when it comes to speed, symmetric trumps asymmetric. However, symmetric keys have a major disadvantage, especially if you’re going to use them for securing file transfers.

    Because the same key has to be used for encryption and decryption, you will need to find a way to get the key to your recipient if he doesn’t have it yet. Otherwise, your recipient won’t be able to decrypt the files you send him. Whichever way you do it, it has to be done in a secure manner, or else anyone who gets hold of that key can simply intercept your encrypted file and decrypt it with the key.

    The issue of key distribution becomes even more pronounced in a file transfer environment, which can involve a large number of users who are likely distributed over a vast geographical area. Some users, most of whom you may never have met, might even be located halfway around the world. Distributing a symmetric key in a secure manner to each of these users would be nearly impossible.

    Asymmetric key encryption doesn’t have this problem. For as long as you keep your private key secret, no one would be able to decrypt your encrypted file. So you can easily distribute the corresponding public key without worrying about who gets a hold of it (well, actually, there are spoofing attacks on public keys but that’s for another story). Anyone who holds a copy of that public key can encrypt a file prior to uploading to your server. Then once the file gets uploaded, you can decrypt it with your private key.
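    In practice the two approaches are often combined (hybrid encryption): the file is encrypted with a fast 256-bit symmetric key, and only that small key is wrapped with the recipient’s 2048-bit RSA public key. Below is a rough sketch of that pattern using the third-party Python cryptography package; the data and variable names are purely illustrative.

    ```python
    import os
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Recipient's asymmetric key pair (the private key never leaves the recipient).
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Sender: encrypt the file with a fresh 256-bit symmetric key (fast),
    # then wrap that key with the recipient's public key (easy to distribute).
    file_data = b"contents of the file to transfer"
    aes_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, file_data, None)

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = public_key.encrypt(aes_key, oaep)

    # Recipient: unwrap the AES key with the private key, then decrypt the file.
    recovered_key = private_key.decrypt(wrapped_key, oaep)
    assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == file_data
    ```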

  • 0 Votes
    5 Posts
    159 Views
    zareen

    @zareen said in CS202 GDB1 Solution and discussion:

    Use AJAX

    AJAX (Asynchronous JavaScript and XML) is a technique for creating fast and dynamic web pages. AJAX allows web pages to be updated asynchronously by exchanging small amounts of data with the server behind the scenes. This means that it is possible to update parts of a web page without reloading the whole page.
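    The asynchronous request itself is issued from client-side JavaScript (e.g. XMLHttpRequest or fetch()). As a server-side companion sketch, here is the kind of small JSON endpoint such a request would exchange data with; Flask is used purely as an assumed example framework.

    ```python
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # A small endpoint an AJAX call would hit in the background; it returns only
    # the fragment of data the page needs, so the browser can update part of the
    # page without a full reload.
    @app.route("/api/messages")
    def messages():
        since = request.args.get("since", 0, type=int)
        return jsonify({"messages": [f"update #{i}" for i in range(since, since + 3)]})

    if __name__ == "__main__":
        app.run(debug=True)  # the page would poll /api/messages?since=<n> asynchronously
    ```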

  • 0 Votes
    4 Posts
    251 Views
    M

    A nanopore is a nano-scale hole. In its devices, Oxford Nanopore passes an ionic current through nanopores and measures the changes in current as biological molecules pass through the nanopore or near it. The information about the change in current can be used to identify that molecule.

    Holes can be created by proteins puncturing membranes (biological nanopores) or in solid materials (solid-state nanopores).

  • 0 Votes
    2 Posts
    117 Views
    M

    Methods for Protein Analysis

    1. Protein Separation Methods
    2. Western Blotting
    3. Protein Identification Methods
       3A. Edman degradation
       3B. Mass spectrometry

    However, and perhaps remarkably, if as shown above we digest samples of our protein with as few as two or three different endoproteases with different specificities, we can usually use the resulting digestion patterns (again, analyzed by mass spectrometry to provide highly accurate determination of the fragment molecular weights) to produce a unique identification of our unknown protein. Again, mass spectrometry is uniquely well suited for such analyses because it can yield very accurate determinations of molecular weights from even very small amounts of fragments resulting from the digestion of a particular protein. Techniques have now been developed by which proteins separated in two-dimensional gels can be digested within the gels and then injected directly into a mass spectrometer for analysis of the resulting fragments.
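    As a toy illustration of how such a digestion pattern can be predicted, the sketch below performs an in-silico trypsin-style digest (cleaving after K or R, except before P) of a made-up sequence and sums approximate monoisotopic residue masses for each fragment; the sequence and the simplified cleavage rule are assumptions for illustration only.

    ```python
    # Approximate monoisotopic residue masses (Da); a peptide mass is the sum of
    # its residue masses plus one water (18.011 Da).
    RESIDUE_MASS = {
        "G": 57.021, "A": 71.037, "S": 87.032, "P": 97.053, "V": 99.068,
        "T": 101.048, "C": 103.009, "L": 113.084, "I": 113.084, "N": 114.043,
        "D": 115.027, "Q": 128.059, "K": 128.095, "E": 129.043, "M": 131.040,
        "H": 137.059, "F": 147.068, "R": 156.101, "Y": 163.063, "W": 186.079,
    }
    WATER = 18.011

    def tryptic_digest(sequence):
        """In-silico digest: cleave after K or R unless the next residue is P."""
        fragments, start = [], 0
        for i, aa in enumerate(sequence):
            if aa in "KR" and (i + 1 == len(sequence) or sequence[i + 1] != "P"):
                fragments.append(sequence[start:i + 1])
                start = i + 1
        if start < len(sequence):
            fragments.append(sequence[start:])
        return fragments

    def peptide_mass(peptide):
        return sum(RESIDUE_MASS[aa] for aa in peptide) + WATER

    protein = "MKWVTFISLLFLFSSAYSRGVFRR"  # made-up toy sequence
    for frag in tryptic_digest(protein):
        print(f"{frag:>20s}  {peptide_mass(frag):8.3f} Da")
    ```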

  • 0 Votes
    2 Posts
    121 Views
    M

    Freeze drying

    Freeze drying (lyophilization) is a dehydration process which allows water to sublimate directly from solid phase to vapour phase at and below the freezing temperature of the material. Sub-atmospheric pressure (< 40 Pa) is maintained in most freeze-drying operations and the condensed water is immediately removed (Pikal, 2007). Freeze drying has been for decades one of the most preferred preservation methods for culture collection maintenance (Morgan et al., 2006). Due to high viability losses, an initial bacterial load of greater than 10⁷ viable cells/mL has been recommended to ensure sufficient cells survive the freeze-drying process, thereby giving better success in storage, reconstitution and propagation (Bozoglu et al., 1987).

    At commercial scale, operational and capital costs of freeze drying are very high. The freeze-drying process operates in batch mode and requires long drying times and large drying units to achieve mass production. Even so, freeze drying is currently the only drying method used at commercial scale for production of starter cultures intended for use as primary acid producers in dairy fermentations.

    It is reported that the majority of bacterial death occurring during freeze drying happens during the freezing stage before the drying (sublimation) process commences. A slow freezing rate leads to higher bacterial death in the subsequent sublimation stage (Uzunova-Doneva and Donev, 2002). Rapid freezing, with formation of smaller ice crystals, favours better bacterial survival. On the other hand, formation of large ice crystals during slow freezing causes structural and physiological injury to the bacterial cells and causes damage to cell membranes that cannot be repaired upon subsequent drying or rehydration (Gardiner et al., 2000).

    Many studies have exploited the addition of ‘protectant’ substances to enhance survival, and have investigated the use of low-cost food ingredients as protectants rather than substances such as glycine betaine (Cleland et al., 2004). Recent examples include work by Jagannath et al. (2010), who studied the survival of various probiotic bacteria after freeze drying. The survival obtained ranged from 67% to 70% depending on bacterial species. Zamora et al. (2006) compared the survival of twelve strains of lactic acid bacteria after freeze drying and reported a range from 3.3% to 100% depending on the bacterial type and protectant type used. For example, the survival of four strains of Lactococcus garviae was reported to be 100% when non-fat skim milk was used as the protectant (Zamora et al., 2006). Reddy et al. (2009) studied survival of three probiotic lactic acid bacteria with eleven different protectants (at various solids concentrations), and suggested that these protected not only the viability of the probiotic lactic acid bacteria but also their functional properties.

  • 0 Votes
    5 Posts
    2k Views
    zareen

    GDB: Fall-2019

    In machine learning, the problem space can be represented through concept space, instance space, version space and hypothesis space. These representations rely on the conjunctive space, which is a very restrictive one, and with the above-mentioned representations of the problem space it is not certain that the true concept lies within the conjunctive space.

    GDB Topic:

    Discuss the case where we have a bigger search space and want to overcome the restrictive nature of the conjunctive space: how can we then represent our problem space? Secondly, in the given scenario, which algorithm should be used to represent the learning problem in our problem space?

  • 0 Votes
    3 Posts
    327 Views
    zaasmi

    @zaasmi Mobile operating systems include Android, iOS, Kindle, Bada, BlackBerry and Microsoft’s Windows. Android is an open mobile operating system with a massive user/employee base and a simplified mobile app development process. Enterprises are leveraging Android and similar platforms to create custom mobile apps that solve problems and increase value for their business. A mobile OS is a good fit for an organization because almost every person has a smartphone, and apps can be developed at low cost.

  • 0 Votes
    4 Posts
    360 Views
    zaasmi

    @zaasmi

    Check out this link for more details

  • 0 Votes
    1 Posts
    350 Views
    No one has replied
  • 0 Votes
    1 Posts
    155 Views
    No one has replied
  • 0 Votes
    1 Posts
    143 Views
    No one has replied
  • 0 Votes
    1 Posts
    213 Views
    No one has replied