Similar Documents
20 similar documents found.
1.
This paper is a review of the changes brought about in the magnetic properties of “iron” during the period 1870 to 1928 and shows the absurdity of using “iron” as a standard for comparison. The latest (1928) value for the initial permeability (μ0) of “iron” is given as 1150, its maximum permeability (μmax) as 61,000, and its hysteresis loss (Wh) as 300 ergs per cubic centimeter per cycle for B = 10,000 gausses. The corresponding values prior to 1900 were: μ0 = 250, μmax = 2600, Wh = 3,000.

2.
The size distribution data obtained by screen analysis of a non-uniform substance cannot be used directly in calculating the various average diameters of the material, since this method of analysis gives the distribution in terms of weight rather than count. The distribution curve by weight, however, bears a definite relation to the regular size-frequency curve, and suitable transformation equations are presented by means of which one is able to calculate the various average diameters from the parameters of the curve given by the screen analysis. The screens are calibrated in terms of the actual size of the material retained rather than the dimensions of the screen openings or an arbitrary method of calculating the “size of separation.”
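As an illustration of the kind of weight-to-count conversion the abstract refers to, the following sketch turns hypothetical screen-analysis weight fractions into count-based average diameters, assuming roughly spherical particles so that the relative number in each fraction scales as w/d³; the sizes, fractions, and particular averages shown are illustrative, not the paper's transformation equations.

```python
# Minimal sketch (not the paper's method): convert screen-analysis weight
# fractions into count-based average diameters, assuming roughly spherical
# particles so the number in each fraction scales as w_i / d_i**3.
# The diameters and weight fractions below are hypothetical.

diameters = [0.05, 0.10, 0.20, 0.40]   # mean particle size per screen fraction, cm
weights   = [0.10, 0.30, 0.40, 0.20]   # weight fraction retained on each screen

counts = [w / d**3 for w, d in zip(weights, diameters)]   # relative particle counts

def moment(p):
    """p-th moment of the size-frequency (count) distribution."""
    return sum(n * d**p for n, d in zip(counts, diameters)) / sum(counts)

arithmetic_mean = moment(1)                 # number-average diameter
surface_mean    = moment(2) ** 0.5          # diameter of average surface
volume_mean     = moment(3) ** (1.0 / 3.0)  # diameter of average volume
sauter_mean     = moment(3) / moment(2)     # surface-volume (Sauter) mean

print(arithmetic_mean, surface_mean, volume_mean, sauter_mean)
```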

3.
Not many years ago it was quite generally believed that iron was unable to follow rapid magnetic changes. Experiments which showed an apparent decrease in the permeability of the iron with an increase in the frequency of the magnetic cycle furnished a basis for a theory that iron was magnetically sluggish. Further and more accurate experiments proved, however, that the effects which had previously been ascribed to a peculiarity of the material were in reality caused by eddy currents in the sample. Theoretical calculations were made which demonstrated that eddy currents in an iron test piece increased as the square of the frequency and that for even the lower frequencies it was necessary to use quite thin laminations in magnetic circuits in order to eliminate deleterious effects. Furthermore, it was found that due to eddy currents and the magnetic properties of iron, the magnetization in high frequency fields was confined to a thin surface layer of the piece. This “Magnetic Skin Effect” reduced the cross section of the iron which was magnetically active even though the laminations were extremely thin. Careful experimental measurements compared with theoretical calculations proved that the real permeability of iron remained unchanged at frequencies up to about 10⁶ and that previous results had been in serious error due to neglect of the factors mentioned. This fact having been established, efforts were made to see what practical use could be made of iron in high frequency work, and to that end some extensive experimental investigations of the saturation curves and core losses were made upon specimens laminated as thinly as was commercially practicable. The resulting data have furnished a basis for design.
It is a demonstrated fact that the permeability of all metals is unity for the magnetic cycles imposed upon them by heat and light waves. In the region between frequencies of about 10⁶, where the true permeability of iron is practically the same as at zero frequency, and frequencies of about 10¹⁰, where the true permeability of iron approaches unity, the experimental values of μ decrease smoothly with the frequency. What happens to μ in the range of frequencies between the longest heat waves and the shortest Hertzian waves which have yet been made is a question which has many interesting features but which has not yet yielded to the experimenter.
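For orientation, the classical eddy-current skin-depth relation δ = √(2ρ/(ωμ)), a textbook result rather than a formula taken from this paper, shows why high-frequency magnetization is confined to a thin surface layer; the resistivity and relative permeability below are assumed round values for iron.

```python
# Illustrative only: the classical skin-depth formula delta = sqrt(2*rho/(omega*mu)),
# which underlies the "magnetic skin effect" described above. The resistivity and
# relative permeability are assumed round values for iron, not data from the paper.
import math

rho  = 1.0e-7                 # resistivity of iron, ohm*m (assumed)
mu_r = 1000.0                 # relative permeability (assumed)
mu_0 = 4 * math.pi * 1e-7     # vacuum permeability, H/m

for f in (60, 1e3, 1e6):      # frequency in Hz
    omega = 2 * math.pi * f
    delta = math.sqrt(2 * rho / (omega * mu_r * mu_0))
    print(f"f = {f:>9.0f} Hz  ->  skin depth = {delta*1e3:.4f} mm")
```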

4.
The principle of the grid-controlled arc or thyratron is briefly described and the nominal ratings as regards filament current, maximum plate current, etc. of four important thyratrons are given in table form. Methods of measuring the grid current, critical grid potential, etc., with D.C. power supply are given along with the results obtained on the General Electric Company thyratrons FG-17, FG-27 and FG-67. Characteristics obtained with A.C. power supply are also shown for these thyratrons and some of the relative advantages of the “phase-shift” and the “critical potential” methods of control are discussed when used in connection with photoelectric cell circuits. The A.C. measurements seem to show that a time of 10⁻³ second is required to start a thyratron. An amplifier circuit is shown by which it is theoretically possible to control a thyratron circuit using an input current to the amplifier of 10⁻¹¹ ampere.

5.
The solution of the differential equation y″ + 2Ry′ + n²y = E cos pt is written in a new form which clearly exhibits many important facts thus far overlooked by theoretical and experimental investigators. Writing s = n − p, and Δn = n − √(n² − R²), it is found: (a) When s ≠ Δn, there are “beats,” and the first “beat” maximum is greater than any later maximum while the first “beat” minimum is less than any later “beat” minimum. The “beat” frequency is (s − Δn). (b) When n² − p² = R², there are no “beats,” and the resultant amplitude grows monotonically from zero to the amplitude of the forced vibration. (c) At resonance, when n = p, we still have maxima which occur with a frequency Δn in a damped system. (d) The absence of “beats” is neither a sufficient nor a necessary condition for resonance in a damped system.
In the experimental investigation the upper extremity of a simple pendulum was moved in simple harmonic motion and photographic records obtained of the motion of the pendulum bob. Different degrees of damping were used, ranging from very small to critical.
The experimental results are in excellent agreement with theory.
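A quick numerical sketch (not the paper's analysis) of the same equation started from rest makes the transient "beats" visible and lets one compare the envelope period with 2π/(s − Δn); the parameter values are arbitrary and the beat frequency is read as an angular frequency.

```python
# Numerical sketch of y'' + 2*R*y' + n**2*y = E*cos(p*t) from rest (y = y' = 0),
# to visualize the transient "beats" described above. Parameter values are arbitrary.
import numpy as np
from scipy.integrate import solve_ivp

R, n, p, E = 0.05, 1.0, 0.9, 1.0   # light damping, drive slightly off resonance

def rhs(t, y):
    return [y[1], E * np.cos(p * t) - 2 * R * y[1] - n**2 * y[0]]

t = np.linspace(0, 300, 6000)
sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t, rtol=1e-8)

# The maxima of sol.y[0] reveal the beat period; compare with 2*pi / (s - dn),
# where s = n - p and dn = n - sqrt(n**2 - R**2), as stated in the abstract.
s, dn = n - p, n - np.sqrt(n**2 - R**2)
print("predicted beat period:", 2 * np.pi / (s - dn))
```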

6.
As a non-invasive therapeutic method without penetration-depth limitation, magnetic hyperthermia therapy (MHT) under an alternating magnetic field (AMF) is a clinically promising thermal therapy. However, the poor heating conversion efficiency and lack of stimulus–response obstruct the clinical application of magnetofluid-mediated MHT. Here, we develop a ferrimagnetic polyethylene glycol-poly(2-hexoxy-2-oxo-1,3,2-dioxaphospholane) (mPEG-b-PHEP) copolymer micelle loaded with hydrophobic iron oxide nanocubes and emodin (denoted as EMM). Besides an enhanced magnetic resonance (MR) contrast ability (r2 = 271 mM−1 s−1) due to the high magnetization, the specific absorption rate (2518 W/g at 35 kA/m) and intrinsic loss power (6.5 nHm2/kg) of EMM are dozens of times higher than those of the clinically available iron oxide nanoagents (Feridex and Resovist), indicating the high heating conversion efficiency. Furthermore, this composite micelle with a flowable core exhibits a rapid response to magnetic hyperthermia, leading to an AMF-activated supersensitive drug release. With its high magnetic response, thermal sensitivity and magnetic targeting, this supersensitive ferrimagnetic nanocomposite achieves a tumor cell killing rate above 70% at an extremely low dosage (10 μg Fe/mL), and the tumors on mice are completely eliminated after the combined MHT–chemotherapy.
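As a rough consistency check (not from the paper), intrinsic loss power is commonly defined as ILP = SAR/(f·H²); using the abstract's SAR, field amplitude, and ILP, the implied drive frequency lands in the usual few-hundred-kHz AMF range. The frequency is not stated in the abstract and is treated here as the unknown.

```python
# Consistency check (not from the paper): intrinsic loss power is commonly
# defined as ILP = SAR / (f * H**2). SAR, H, and ILP are taken from the
# abstract; the AMF drive frequency f is the unknown being solved for.
SAR = 2518 * 1e3        # 2518 W/g  ->  W/kg
H   = 35e3              # field amplitude, A/m
ILP_reported = 6.5e-9   # 6.5 nH*m^2/kg  ->  H*m^2/kg

f = SAR / (ILP_reported * H**2)   # frequency implied by the reported SAR and ILP
print(f"implied AMF frequency ~ {f/1e3:.0f} kHz")
```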

7.
Throughout this paper data have been presented showing that the apparent inconsistency of the reported dielectric strength behavior of insulating liquids can be satisfactorily correlated if proper consideration be given to the state of the “purity” of the liquid itself. As a result it is suggested that insulating liquids should be classified as (a) “pure,” indicating those liquids free from dissolved gases as primary “impurities”; and (b) “impure,” including those liquids which contain dissolved gas. The breakdown mechanism depends on the distinctive behavior of these two general classes. “Pure” liquid breakdown is a function of charged particle formation. In part, this may be caused by the assumption of a charge by molecular aggregates, colloidal-like in nature. In part, the charge may arise from molecular ionization by collision. The latter occurs chiefly in the voltage range immediately preceding electrical rupture and is the chief cause of “pure” liquid insulation failure. The presence of the first type of charge—that is, the existence of a difference of potential between molecular aggregates and the liquid—is chiefly responsible for the variation in the time factor to breakdown.
The breakdown of “impure” liquids is a function of dissolved gas elimination. This dissolved gas is eliminated as a result of changing solubility produced (a) by electro-striction effects, or (b) by changing pressure or temperature. The presence of secondary impurities, such as dust particles and fibers, acts chiefly through its effect of increasing gassing tendencies.
It is suggested further that the localization of dielectric breakdown in liquids, irrespective of the type or degree of “purity,” is chiefly in the “neutral membrane” located near the electrodes and formed by the discharge of particles. Such a “neutral membrane” results in a space charge effect giving a marked drop in potential and as a result promoting ionization by collision effects in “pure” liquids and electro-striction effects in “impure” liquids.

8.
In a servomechanism using a two-phase alternating current control motor, a 90° difference is required in the phases of the carrier-frequency voltages applied to the fixed and control windings. This part describes and compares various methods of obtaining the phase difference.
The question of the possibility of a phase-shifting proportional-derivative parallel “T” is answered in the negative, by the result that in any parallel “T” transfer characteristic, if the quadratic factor in the numerator is of the proportional-derivative form at the correct resonant frequency, the amount of phase shift which may be obtained from the remaining portion of the transfer characteristic is less than arc tan (2/n), where n is twice the carrier frequency divided by the notch width. Thus for values of n high enough to have an appreciable stabilizing effect, the maximum obtainable intrinsic phase shift is negligible.
In order to obtain a large phase shift it is necessary to add either a series input or a load impedance to the parallel “T,” or to use a phase-shifting network preceding or following the parallel “T.” Formulae and design charts are given for determination of the values of the components of phase lag networks.
The method of calculation of tolerance requirements on the components, in terms of allowable deviation from the correct phase, is illustrated by an example of a phase lag network used in conjunction with a bridge “T” proportional-derivative network.

9.
In an alternating current servomechanism, the error is proportional to the modulation envelope of a modulated-carrier error signal. It is shown in part I that for stability and fidelity of the servo, it is highly desirable that the effect of the controller includes a proportional-derivative action on the modulation envelope. This action may be obtained with various forms of RC networks, including the parallel “T,” bridge “T,” and Wien Bridge forms.
This part contains detailed design procedures and tables of values for the various types of proportional-derivative networks. Several forms of parallel “T” networks arise from the fact that there are five independent time constants in the network, while in order to realize the desired transfer characteristic it is necessary to impose only four conditions. It is indicated how the remaining degree of freedom may be used to obtain the most suitable input and output impedances for the source and load impedances with which the parallel “T” is to be used. The derivations for the parallel “T” formulae are given in an Appendix.
Tolerance requirements on the components of parallel “T” and bridge “T” networks are derived. If ±1 per cent components are used at 60 cycles, the resonant frequency will lie between 56.4 and 63.6 cycles, and the notch width (rejection band width) will be within ±0.99 cps of the correct value. In order to guarantee that the phase shift at 60 cycles is within ±10°, the percentage deviation of each part must be less than 9.0/(Tdω0), where ω0 is the carrier angular frequency and Td the derivative time constant.

10.
In this paper, necessary and sufficient conditions are derived for the existence of temporally periodic “dissipative structure” solutions in cases of weak diffusion with the reaction rate terms dominant in a generic system of reaction-diffusion equations ∂ci/∂t = Di∇²ci + Qi(c), where the enumerator index i runs from 1 to n, ci = ci(x, t) denotes the concentration or density of the ith participating molecular or biological species, Di is the diffusivity constant for the ith species, and Qi(c), an algebraic function of the n-tuple c = (c1, …, cn), expresses the local rate of production of the ith species due to chemical reactions or biological interactions.
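A generic one-species sketch of an equation of this form, integrated by explicit finite differences, is shown below; the logistic reaction term is an arbitrary choice for illustration only, whereas the paper treats a general n-species system Qi(c).

```python
# Generic sketch of dc/dt = D * d2c/dx2 + Q(c) on a 1-D periodic domain,
# integrated with an explicit Euler / central-difference scheme.
# Q(c) is an arbitrary logistic term chosen only for illustration.
import numpy as np

D, dx, dt = 0.01, 0.1, 0.001
x = np.arange(0, 10, dx)
c = 0.5 + 0.1 * np.sin(2 * np.pi * x / 10)     # initial concentration profile

def Q(c):
    return c * (1.0 - c)                        # illustrative reaction term

for _ in range(5000):
    lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2   # periodic Laplacian
    c = c + dt * (D * lap + Q(c))

print(c.min(), c.max())
```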

11.
Determining an input matrix, i.e., locating a predefined number of nodes (named “key nodes”) connected to external control sources that provide control signals, so as to minimize the cost of controlling a preselected subset of nodes (named “target nodes”) in directed networks is an outstanding issue. This problem arises especially in large natural and technological networks. To address this issue, we focus on directed networks with linear dynamics and propose an iterative method, termed the “L0-norm constraint based projected gradient method” (LPGM), in which the input matrix B is involved as a matrix variable. By introducing a chain rule for matrix differentiation, the gradient of the cost function with respect to B can be derived. This allows us to search for B by applying a probabilistic projection operator between two spaces, i.e., the real-valued matrix space R^(N×M) and the L0-norm matrix space R_L0^(N×M) obtained by restricting the L0 norm of B to a fixed value M. Then, the nodes that correspond to the M nonzero elements of the obtained input matrix (denoted as BL0) are selected as the M key nodes, and each external control source is connected to a single key node. Simulation examples in real-life networks are presented to verify the potential of the proposed method. An interesting phenomenon we uncovered is that the control cost of scale-free (SF) networks is generally higher than that of Erdos–Renyi (ER) networks when the same number of external control sources is used to control the same number of target nodes in networks of the same size and mean degree. This work will deepen the understanding of optimal target control problems and provide new insights into locating key nodes for achieving minimum-cost control of target nodes in directed networks.
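A hedged sketch of the projection step that the method's name suggests (keep the M largest-magnitude entries of B and zero the rest), embedded in a generic projected-gradient loop; the gradient function below is a placeholder, not the matrix-chain-rule gradient derived in the paper.

```python
# Sketch of an L0-constrained projected-gradient step of the kind LPGM's name
# suggests: after each gradient update, project the input matrix B onto the set
# of matrices with at most M nonzero entries. grad_cost() is a placeholder;
# the paper derives the actual gradient of the target-control cost.
import numpy as np

def project_L0(B, M):
    """Keep the M largest-magnitude entries of B, zero the rest."""
    flat = B.ravel()
    keep = np.argsort(np.abs(flat))[-M:]
    out = np.zeros_like(flat)
    out[keep] = flat[keep]
    return out.reshape(B.shape)

def grad_cost(B):
    return B          # placeholder gradient, for illustration only

N, M, step = 20, 3, 0.01          # N network nodes, M external control sources
B = np.random.randn(N, M)
for _ in range(200):
    B = project_L0(B - step * grad_cost(B), M)

key_nodes = np.unique(np.nonzero(B)[0])   # rows (nodes) attached to a source
print("key nodes:", key_nodes)
```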

12.
Background: LXYL-P1-2 is the first reported glycoside hydrolase that can catalyze the transformation of 7-β-xylosyl-10-deacetyltaxol (XDT) to 10-deacetyltaxol (DT) by removing the d-xylosyl group at the C-7 position. The successful synthesis of paclitaxel by a one-pot method combining LXYL-P1-2 and 10-deacetylbaccatin III-10-β-O-acetyltransferase (DBAT), using XDT as a precursor, makes LXYL-P1-2 a highly promising enzyme for the industrial production of paclitaxel. The aim of this study was to investigate the catalytic potential of LXYL-P1-2 stabilized on magnetic nanoparticles, the surface of which was modified by Ni2+-immobilized cross-linked Fe3O4@Histidine.
Results: The diameter of the matrix was 20–40 nm. The Km value of the immobilized LXYL-P1-2 catalyzing XDT (0.145 mM) was lower than that of the free enzyme (0.452 mM), and the kcat/Km value of the immobilized enzyme (12.952 mM−1 s−1) was higher than that of the free form (8.622 mM−1 s−1). The immobilized form maintained 50% of its original activity after 15 cycles of reuse. In addition, the storage stability of the immobilized LXYL-P1-2 improved in comparison with the free form, maintaining 84.67% of its initial activity after 30 d at 4°C.
Conclusions: This investigation not only provides an effective procedure for the biocatalytic production of DT, but also gives an insight into the application of magnetic-material immobilization technology.
How to cite: Zou S, Chen TJ, Li DY, et al. LXYL-P1-2 immobilized on magnetic nanoparticles and its potential application in paclitaxel production. Electron J Biotechnol 2021;50. https://doi.org/10.1016/j.ejbt.2020.12.005

13.
Certain inequalities are presented, related to the L2 norms of the solutions to the vibrating string and heat conduction partial differential equations; in particular, an “L2 maximum principle” is derived for the heat equation, and similar inequalities for the vibrating string problem.
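The paper's specific inequalities are not reproduced here, but the standard L2 energy estimate for the heat equation, of the kind the abstract alludes to, reads as follows for u_t = k u_xx on 0 < x < L with homogeneous Dirichlet (or Neumann) boundary conditions:

\[
\frac{d}{dt}\int_0^L u^2\,dx \;=\; 2k\int_0^L u\,u_{xx}\,dx \;=\; -\,2k\int_0^L u_x^2\,dx \;\le\; 0,
\]

so that \(\lVert u(\cdot,t)\rVert_{L^2} \le \lVert u(\cdot,0)\rVert_{L^2}\): the L2 norm of the solution never exceeds its initial value, which is the sense in which an “L2 maximum principle” holds.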

14.
The secondary electron emission from alkaline-earth oxide-coated cathodes has been investigated under both continuous and pulsed bombardment. Various factors affecting the yield, such as dependence upon primary voltage, collecting voltage, temperature, time, and angle of incidence, are noted, and the present state of the theory is discussed.
Experiments have been performed with three types of apparatus. Yield vs. energy data reveal values of δ of 4–7 at room temperature, with a more or less flat maximum at approximately 1,000 volts primary energy.
The yield increases with temperature in an exponential manner, and plots of log Δδ (i.e., δT°K − δ300°K) vs. 1/T give straight lines. Values of Q1 between 0.9 and 1.5 eV are generally indicated, and from extrapolation of these curves, yields exceeding 100 at 850°C are deduced. The secondary emission depends upon the degree of activation, and increases with enhancement of the thermionic emission characteristics. Short-time effects such as growth or decay of secondary current after the onset of primary bombardment or persistence after the cessation of bombardment have not been observed, and values of yield obtained by pulsed methods are in accord with those obtained under D.C. conditions. Tail phenomena reported by J. B. Johnson and interpreted as “enhanced thermionic emission” from oxide-coated cathodes become manifest only under experimental conditions characterized by certain space-charge effects, and have been effectively simulated by bombarding a tantalum target adjacent to an electron-emitting tungsten filament. Various measurements of the energy distribution of secondary electrons as a function of primary voltage and temperature have been obtained. It was observed that the average energy of the secondary electrons decreases with temperature at a rate which more than compensates for the increase in the number of secondaries emitted per incident primary. The mechanism of the observed dependence of yield upon temperature is not well understood. Various alternative explanations are discussed and, in the light of the present state of our knowledge, regarded as untenable.
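A small sketch of the Arrhenius-style analysis described above: if δ(T) = δ300K + A·exp(−Q1/kT), then log Δδ plotted against 1/T is a straight line whose slope yields Q1. The prefactor and activation energy below are assumed illustrative values, not the measured ones.

```python
# Sketch of the analysis described above: if delta(T) = delta_300K + A*exp(-Q1/(k*T)),
# then log(delta(T) - delta_300K) vs 1/T is a straight line whose slope gives Q1.
# A, Q1 and delta_300K below are illustrative, not measured values.
import numpy as np

k_eV = 8.617e-5                    # Boltzmann constant, eV/K
Q1, A, delta_300 = 1.2, 5e7, 5.0   # assumed activation energy (eV), prefactor, room-T yield

T = np.linspace(600, 1100, 6)                       # temperatures, K
delta = delta_300 + A * np.exp(-Q1 / (k_eV * T))

slope, intercept = np.polyfit(1.0 / T, np.log(delta - delta_300), 1)
print("recovered Q1 =", -slope * k_eV, "eV")        # recovers the assumed ~1.2 eV
```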

15.
The performance of a microscale aluminum nitride piezoelectric resonator in the shape of a trampoline is analyzed using three-dimensional finite element simulations. The air-suspended resonator is supported by beams and is designed to respond to longitudinal through-thickness vibrations. The device is targeted to operate at UHF frequencies (3 GHz) suitable for wireless filtering applications. Energy loss due to material damping is accounted for in the model. Other sources of damping are considered. We analyze if and how the material thickness, number of beams and beam length affect the resonator performance. This is intended to provide useful information at the design stage and eliminate the high costs associated with manufacturing a filter with poor performance. Performance is evaluated by means of the electromechanical coupling coefficient (K²) and the quality factor (Q) calculated from the electrical impedance frequency response plots. The results indicate that (i) K² is insensitive to geometry (K² ~ 6.5%), (ii) Q increases linearly with the AlN thickness, attaining Q ~ 1900 for a 1.7 μm thick resonator, and (iii) a trampoline resonator with three beams has a better performance capability than the resonator with four or eight beams, with a figure of merit K²Q ~ 120, and resonates at a higher frequency than its counterpart resonators, peaking at 3.21 GHz. The performance figures agree well with those predicted by a one-dimensional theory. The value of K² also agrees well with test data but that of Q is higher than the one recorded in the lab.
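One common way to extract these figures from an impedance sweep (a textbook estimate, not necessarily the exact definitions used in the paper) is K² ≈ (fp² − fs²)/fp² and Q ≈ fr/Δf3dB; the frequencies below are assumed values chosen only to land near the magnitudes quoted above.

```python
# Illustrative post-processing of an impedance frequency sweep: the effective
# coupling is often estimated as K2 = (fp**2 - fs**2) / fp**2 and the quality
# factor as Q = fr / (3-dB bandwidth). All numbers below are assumed.
fs = 3.10e9          # series (minimum-impedance) resonance, Hz  (assumed)
fp = 3.21e9          # parallel (maximum-impedance) resonance, Hz (assumed)
bw = 1.7e6           # 3-dB bandwidth around resonance, Hz        (assumed)

K2 = (fp**2 - fs**2) / fp**2
Q  = fp / bw
print(f"K2 = {K2*100:.1f} %,  Q = {Q:.0f},  figure of merit K2*Q = {K2*Q:.0f}")
```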

16.
This paper is a continuation of a previous paper published in this JOURNAL. The basic idea in the two papers is to enlarge the assemblage of thermodynamic states by including the so-called “metastable” states. Considering a system, in one or two phases, which has a single type of transformation, the writer develops an equation of state of the form η = a + bv + cp + dpv + (e + fv + gp + hpv) ln T, where p, v, T are three independent variables and a, b, c, etc. are constants.
The latent heat at p, T = constant is λp,T = T(v2 − v1)[b + dp + (f + hp) ln T], which is derived from the equation of state.
The available thermodynamic data on ammonia and steam are used to check these equations. It is found that within the saturated region the agreement is quite satisfactory, whereas for the superheated region the agreement is not so good.
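Assuming the second term of the equation of state is bv (consistent with p, v, T as the independent variables), the latent-heat expression above follows directly: at constant p and T the entropy change between the two phases is

\[
\eta_2-\eta_1=(v_2-v_1)\bigl[b+dp+(f+hp)\ln T\bigr],
\qquad
\lambda_{p,T}=T\,(\eta_2-\eta_1)=T\,(v_2-v_1)\bigl[b+dp+(f+hp)\ln T\bigr],
\]

which matches the latent-heat formula quoted in the abstract.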

17.
Cloud computing is now a global trend and during the past decade, has drawn attention from both academic and business communities. Although the evolution of cloud computing has not reached the maturity level, there is still adequate research about the topic. The main purpose of this paper is to examine the development and evolution of cloud computing over time. A content analysis was conducted for 236 scholarly journal articles, which were published between 2009 and 2014, in order to (i) identify the possible trends and changes in cloud computing over the six years, (ii) compare publishing productivity of journals about the cloud computing subject, and (iii) guide future research about cloud computing. The results show that the majority of the cloud computing research is about “cloud computing adoption” (19%), and it was followed by the “legal and ethical issues” of cloud computing (15%). It is also found that “cloud computing for mobile applications” (6%), “benefits & challenges of cloud computing” (5%) and “energy consumption dimension of cloud computing” (4%) are the least attention-grabbing themes in the literature. However, “cloud computing for mobile applications” and “energy consumption dimension of cloud computing” themes have become popular in the last two years, so they are expected to be trendy topics of the near future. Finally, another finding of this study is that the majority of the articles were published by engineering, information systems or technical journals such as “IT Professional Magazine,” “International Journal of Information Management” and “Mobile Networks and Applications.” It seems as if this topic is generally ignored by the managerial and organizational journals even though the impact of cloud computing on organizations and institutions is immense and is in need of investigation.

18.
Mechanically exfoliated two-dimensional ferromagnetic materials (2D FMs) possess long-range ferromagnetic order and topologically nontrivial skyrmions in few layers. However, because of the dimensionality effect, such few-layer systems usually exhibit much lower Curie temperature (TC) compared to their bulk counterparts. It is therefore of great interest to explore effective approaches to enhance their TC, particularly in wafer-scale for practical applications. Here, we report an interfacial proximity-induced high-TC 2D FM Fe3GeTe2 (FGT) via A-type antiferromagnetic material CrSb (CS) which strongly couples to FGT. A superlattice structure of (FGT/CS)n, where n stands for the period of FGT/CS heterostructure, has been successfully produced with sharp interfaces by molecular-beam epitaxy on 2-inch wafers. By performing elemental specific X-ray magnetic circular dichroism (XMCD) measurements, we have unequivocally discovered that TC of 4-layer Fe3GeTe2 can be significantly enhanced from 140 K to 230 K because of the interfacial ferromagnetic coupling. Meanwhile, an inverse proximity effect occurs in the FGT/CS interface, driving the interfacial antiferromagnetic CrSb into a ferrimagnetic state as evidenced by double-switching behavior in hysteresis loops and the XMCD spectra. Density functional theory calculations show that the Fe-Te/Cr-Sb interface is strongly FM coupled and doping of the spin-polarized electrons by the interfacial Cr layer gives rise to the TC enhancement of the Fe3GeTe2 films, in accordance with our XMCD measurements. Strikingly, by introducing rich Fe in a 4-layer FGT/CS superlattice, TC can be further enhanced to near room temperature. Our results provide a feasible approach for enhancing the magnetic order of few-layer 2D FMs in wafer-scale and render opportunities for realizing realistic ultra-thin spintronic devices.

19.
Assessment of the dielectrophoresis (DEP) cross-over frequency (fxo), cell diameter, and derivative membrane capacitance (Cm) values for a group of undifferentiated human embryonic stem cell (hESC) lines (H1, H9, RCM1, RH1), and for a transgenic subclone of H1 (T8) revealed that hESC lines could not be discriminated on their mean fxo and Cm values, the latter of which ranged from 14 to 20 mF/m2. Differentiation of H1 and H9 to a mesenchymal stem cell-like phenotype resulted in similar significant increases in mean Cm values to 41–49 mF/m2 in both lines (p < 0.0001). BMP4-induced differentiation of RCM1 to a trophoblast cell-like phenotype also resulted in a distinct and significant increase in mean Cm value to 28 mF/m2 (p < 0.0001). The progressive transition to a higher membrane capacitance was also evident after each passage of cell culture as H9 cells transitioned to a mesenchymal stem cell-like state induced by growth on a substrate of hyaluronan. These findings confirm the existence of distinctive parameters between undifferentiated and differentiating cells on which future application of dielectrophoresis in the context of hESC manufacturing can be based.
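In the widely used single-shell, low-conductivity approximation (a standard model, not necessarily the exact analysis used here), the first DEP cross-over frequency relates cell radius and specific membrane capacitance via fxo ≈ √2·σm/(2πrCm); the medium conductivity and example cell values below are assumed, and with them the recovered Cm falls within the 14–20 mF/m2 range quoted above.

```python
# Sketch of the standard single-shell, low-conductivity estimate relating the
# first DEP cross-over frequency to specific membrane capacitance:
#     fxo ~ sqrt(2) * sigma_m / (2 * pi * r * Cm)
# The medium conductivity and example cell values are assumed, not from the paper.
import math

sigma_m = 0.03          # suspending-medium conductivity, S/m (assumed)
r       = 7.5e-6        # cell radius, m (assumed, ~15 um diameter)
fxo     = 60e3          # measured cross-over frequency, Hz (assumed)

Cm = math.sqrt(2) * sigma_m / (2 * math.pi * r * fxo)
print(f"Cm ~ {Cm*1e3:.1f} mF/m^2")
```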

20.
This study identified the influence of the main concepts contained in Zipf's classic 1949 book entitled Human Behavior and the Principle of Least Effort (HBPLE) on library and information science (LIS) research. The study analyzed LIS articles published between 1949 and 2013 that cited HBPLE. The results showed that HBPLE has a growing influence on LIS research. Of the 17 cited concepts that were identified, the concept of “Zipf's law” was cited most (64.8%), followed by “the principle of least effort” (24.5%). Although the concept of “the principle of least effort,” the focus of HBPLE, was not the most frequently cited, an increasing trend was evident regarding the influence of this concept. The concept of “the principle of least effort” has been cited mainly by researchers of information behavior and has served to support the citing authors’ claims. By contrast, the concept of “Zipf's law” received the most attention from bibliometrics research and was used mainly for comparisons with other informetrics laws or research results.
