SC19

AtlanticWave-SDX (AW-SDX) demonstration at Supercomputing conference (SC19)

The AmLight-ExP Network Engineering Team participated in multiple collaborative Network Research Exhibitions at the Supercomputing Conference (SC19), held November 17-22, 2019, at the Colorado Convention Center in Denver, Colorado.
AmLight-ExP and AtlanticWave-SDX offered the academic community 630Gbps of upstream bandwidth, network auto-recovery and dynamic provisioning, network programmability, network telemetry, integration with the SENSE project’s distributed orchestrator, and 100G DTNs.

The demonstration/presentations took place at the Caltech booth.

Title: Global Petascale to Exascale Workflows for Data Intensive Science Accelerated by Next Generation Programmable SDN Architectures and Machine Learning Applications
Lead: Harvey Newman (Caltech)
Abstract: The demonstration includes several of the latest major advances in software defined and Terabit/sec networks, intelligent global operations and monitoring systems, workflow optimization methodologies with real-time analytics, and state of the art long distance data transfer methods and tools and server designs, to meet the challenges faced by leading edge data intensive experimental programs in high energy physics, astrophysics, climate and other fields of data intensive science. The key challenges being addressed include: (1) global data distribution, processing, access and analysis, (2) the coordinated use of massive but still limited computing, storage and network resources, and (3) coordinated operation and collaboration within global scientific enterprises each encompassing hundreds to thousands of scientists.
AmLight Express and Protect (AmLight-ExP) (NSF Award #1451018) has been supporting the LSST, the LHC, and many high-throughput, low-latency experiments using a Software-Defined Network (SDN) that includes the new Monet submarine cable and a 100G ring network. The AmLight SDN was built to enhance collaboration, research, and education between the U.S. and South America.

Title: Big Data Express project demonstration
Lead: Wenji Wu (Fermilab)
Abstract: Big data has emerged as a driving force for scientific discoveries. To meet data transfer challenges in the big data era, DOE’s Advanced Scientific Computing Research (ASCR) office has funded the BigData Express project (http://bigdataexpress.fnal.gov). BigData Express is targeted at providing schedulable, predictable, and high-performance data transfer services for DOE’s large-scale science computing facilities and their collaborators. In this demo, we use BigData Express software to demonstrate bulk data movement over wide area networks. The following features of BigData Express will be demonstrated:

  • A peer-to-peer, scalable, and extensible model for data transfer services;
  • A visually appealing, easy-to-use web portal;
  • A high-performance data transfer engine;
  • Orchestration and scheduling of the system (DTN), storage, and network (SDN) resources involved in file transfers;
  • On-demand provisioning of end-to-end network paths with guaranteed QoS;
  • Robust data transfer service provisioning through strong error-handling mechanisms;
  • Safe and secure data transfer services using multiple security mechanisms;
  • Interoperation between BigData Express and SENSE;
  • Integration of BigData Express with scientific workflows.

The demonstrations and presentations are a collaborative effort from the following participating institutions:
California Institute of Technology (Caltech: High Energy Physics (HEP), Laser Interferometer Gravitational-Wave Observatory (LIGO), The Large Synoptic Survey Telescope (LSST), Caltech: Information Management Systems & Services (IMSS)), University of Southern California (USC), Florida International University (FIU), USC Information Sciences Institute (ISI), Georgia Institute of Technology (GeorgiaTech), Yale University, IBM, University of Maryland, Virnao, San Diego Supercomputer Center (SDSC), University of California San Diego (UCSD), Pacific Research Platform (PRP), Massachusetts Institute of Technology (MIT), Energy Sciences Network (ESnet), National Energy Research Scientific Computing Center (NERSC), Lawrence Berkeley National Laboratory (LBNL), São Paulo State University (UNESP), Starlight, Internet2, Johns Hopkins University, SURFnet, Fermi National Accelerator Laboratory (Fermilab), Argonne National Laboratory, University of Michigan, Northeastern University, Colorado State University, Tennessee State University, University of California Los Angeles (UCLA), Tata Institute of Fundamental Research (TIFR Mumbai), Corporation for Education Network Initiatives in California (CENIC: Pacific Wave), Academic Network at São Paulo (ANSP Brazil), National Education and Research Network (Rede Nacional de Ensino e Pesquisa – RNP), National University Network (Red Universitaria Nacional – REUNA Chile), Ciena.

About the SC Conference Series
Established in 1988, the annual SC conference continues to grow steadily in size and impact each year. Approximately 5,000 people participate in the technical program, with about 11,000 people overall.
SC has built a diverse community of participants including researchers, scientists, application developers, computing center staff and management, computing industry staff, agency program managers, journalists, and congressional staffers. This diversity is one of the conference’s main strengths, making it a yearly “must attend” forum for stakeholders throughout the technical computing community.
The technical program is the heart of SC. It has addressed virtually every area of scientific and engineering research, as well as technological development, innovation, and education. Its presentations, tutorials, panels, and discussion forums have included breakthroughs in many areas and inspired new and innovative areas of computing. (http://supercomputing.org/)

GRP2019

Global Research Platform Workshop (GRP)

The Global Research Platform Workshop (GRP) and Global Network Advancement Group (GNA-G) Meeting was held on 17-19 September 2019 at the Calit2-Qualcomm Institute, University of California San Diego, La Jolla, United States. The GRP is an evolving effort focused on design, implementation, and operation strategies for next-generation distributed services, network infrastructure, and interoperable Science DMZs on a global scale to facilitate data transfer and accessibility. The GRP Workshop highlighted global science drivers and their requirements, high-performance data fabrics, and distributed cyberinfrastructure, including advanced networks customized to support scientific workflows. The workshop brought together researchers, scientists, engineers, and network managers from U.S. and international research platform initiatives to share best practices and advance the state of the art. GRP gathered 94 attendees representing 13 countries (Australia, Brazil, Canada, Czech Republic, Denmark, Germany, Japan, Korea, Netherlands, Poland, Singapore, Taiwan, and the U.S.).
More details about the workshop can be found here: http://grp-workshop-2019.ucsd.edu/program.html

For the AmLight ExP presentation, click here.
For the AtlanticWave-SDX presentation, click here.
For the AmLight-SACS presentation, click here.


FABRIC project launches with $20 Million NSF grant to test a reimagined Internet

Florida International University (FIU) is a participating site for the FABRIC project as a Science Design Driver and Resource Provider.  Additionally, FABRIC will be connected to the AmLight network for international research collaborations with researchers in South America and the Caribbean.

Collaboration will establish a nationwide network infrastructure

The National Science Foundation (NSF) announced this week a collaborative project to create a platform for testing novel internet architectures that could enable a faster, more secure Internet.

FABRIC will provide a nationwide testbed for reimagining how data can be stored, computed and moved through shared infrastructure. FABRIC will allow scientists to explore what a new Internet could look like at scale and will help determine the internet architecture of the future.

A series of government-funded programs from the 1960s through the 1980s established the computer networking architectures that formed the basis for today’s internet. FABRIC will help test out new network designs that could overcome current bottlenecks and continue to extend the Internet’s broad benefits for science and society. FABRIC will explore the balance between the amount of information a network maintains, the network’s ability to process information, and its scalability, performance and security.

“The Internet has been a great enabler for many science disciplines and in people’s everyday lives, but it is showing its age and limitations, especially when it comes to processing large amounts of data. If computer scientists were to start over today, knowing what they now know, the Internet might be designed in a different way,” said Ilya Baldin, director of Network Research & Infrastructure at RENCI, the UNC-Chapel Hill institute that serves as the project’s lead institution.

“FABRIC represents large-scale network infrastructure where the Internet can be reimagined, and a variety of ideas can be tried out and compared. If FABRIC allows the research community to come up with ideas on how to reimagine the Internet based on a new set of architectural tradeoffs, then everybody wins – researchers and citizens alike,” said Baldin.

Today’s Internet was not designed for the massive data sets, machine learning tools, advanced sensors and Internet of Things devices that have become central to many research and business endeavors. FABRIC will give computer scientists a place to test networking and cybersecurity solutions that can better capitalize on these tools and potentially extend the Internet’s benefits to people in remote or underserved areas.
“We look forward to FABRIC enabling researchers throughout the nation to develop and test new networking technologies and capabilities,” said Erwin Gianchandani, acting assistant director for computer and information science and engineering at the National Science Foundation. “This project will lead to novel paradigms for next-generation networks and services, giving rise to future applications advancing science and the economy.”

FABRIC will consist of storage, computational and network hardware nodes connected by dedicated high-speed optical links. In addition to the interconnected deeply-programmable core nodes deployed across the country, FABRIC nodes will include major national research facilities such as universities, national labs and supercomputing centers that generate and process enormous scientific data sets. Such flexibility and control over the network functionality (at all points in the network) will allow experimenters to test new architectures not possible today. All major aspects of the FABRIC infrastructure will be programmable, so researchers can create new configurations or tailor the platform for specific research purposes, such as cybersecurity.

“We don’t know what’s the right balance between smarts, or how self-knowledgeable the Internet needs to be, and scalability and performance,” said Baldin. “What we are offering is an instrument where these questions can be studied and researchers can make real progress toward envisioning the Internet of the future.”

The core FABRIC team includes the University of Kentucky, the Department of Energy’s Energy Sciences Network (ESnet), Clemson University and the Illinois Institute of Technology. Contributors from the University of Kentucky and ESnet will be instrumental in designing and deploying the platform’s hardware and developing new software. Clemson and Illinois Institute of Technology researchers will work with a wide variety of user communities—including those focused on security, distributed architectures, scientific applications and data transfer protocols—to ensure FABRIC can serve their needs. In addition, researchers from many other universities will help test the platform and integrate their computing infrastructure and scientific instruments into FABRIC.

The construction phase of the project is expected to last four years, with the first year dedicated to software development, finalizing technical designs, and prototyping. Subsequent years will focus on rolling out the platform’s hardware to participating sites across the nation and connecting it to major national computing facilities. Ultimately, experimenter communities will be able to attach new instruments or hardware resources to FABRIC’s uniquely extensible design, allowing the infrastructure to grow and adapt to changing research needs over time.

To see the original press release article, please click here.

 

AmLight Express and Protect project adds three 200Gbps optical waves for Research and Education between the U.S. and Brazil

PRESS RELEASE

 

Miami, Florida, August 30, 2019 – Florida International University (FIU), Rede Nacional de Ensino e Pesquisa (RNP), the Academic Network of Sao Paulo (ANSP), the Association of Universities for Research in Astronomy (AURA), and Angola Cables are pleased to announce the addition of three 200Gbps optical waves for research and education between the U.S. and Brazil. These three 200Gbps optical waves represent the Express path of the AmLight Express and Protect (AmLight-ExP) project, a 5-year National Science Foundation (NSF) award to FIU (OAC-1451018), with support from AURA and the AmLight Consortium.

The Express path is built upon the Monet submarine cable system, which links the U.S. to Brazil and is operated by Angola Cables. The new optical waves were added using optical spectrum: Angola Cables assigned a total of 150GHz over the Monet submarine cable system for use by the AmLight Consortium. The AmLight Consortium uses this 150GHz of spectrum to create three 200Gbps optical waves between Boca Raton, Fortaleza, and Sao Paulo. Each optical wave enables the use of two 100Gbps client ports. The Express path is represented in the figure by the solid green segments, where each segment represents a 100Gbps link from a 200Gbps optical wave; the Protect path is represented by the other segments, which form a ring around South America.

The AmLight Consortium built the Express path using Ciena-generated waves over the SubCom-constructed optical spectrum, a nascent approach that will give the AmLight Consortium the flexibility to upgrade bandwidth capacity as optical technology advances. The spectrum will be available to the research and education community at least until 2032. This is important for the Large Synoptic Survey Telescope (LSST), whose science mission will rely upon a robust network service that can provide the bandwidth needed to transport 12.7 GB images within 5 seconds from the LSST Base site in La Serena, Chile, to the archive site at the National Center for Supercomputing Applications (NCSA) in Urbana-Champaign, Illinois, for roughly 10 hours every night, 365 nights a year, over the 10-year period of the LSST survey. Starting in 2022, LSST will make about 1,000 visits per night (each visit comprising two images) with its 3.2-billion-pixel camera, recording the entire visible southern sky twice each week.
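As a back-of-the-envelope check of the figures above (an illustrative sketch; the 12.7 GB image size, 5-second deadline, and wave counts are taken from the text):

```python
# Back-of-the-envelope check of the bandwidth figures described above.
GIGA = 1e9

# Each LSST visit image is ~12.7 GB and must reach NCSA within 5 seconds.
image_bytes = 12.7 * GIGA
deadline_s = 5

# Minimum sustained throughput for one image, in Gbps.
required_gbps = image_bytes * 8 / deadline_s / GIGA
print(f"required per-image throughput: {required_gbps:.1f} Gbps")  # ~20.3 Gbps

# Express path: three 200 Gbps optical waves, each exposing two 100 Gbps
# client ports.
waves, gbps_per_wave, ports_per_wave = 3, 200, 2
total_express_gbps = waves * gbps_per_wave
client_ports = waves * ports_per_wave
print(f"Express capacity: {total_express_gbps} Gbps over {client_ports} x 100G client ports")
```

So a single image transfer needs a sustained ~20 Gbps, comfortably within one 100Gbps client port, while the full Express path offers 600 Gbps of aggregate capacity.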

The combined Express and Protect paths form a path-diverse resilient high-performance network infrastructure, built to enable and support big science applications, such as astronomy and high-energy physics, operated by the AmLight Consortium. The AmLight Consortium members include Florida International University (FIU), the Academic Network of Sao Paulo (ANSP), the Rede Nacional de Ensino e Pesquisa (RNP) (Brazilian research and education network), the Red Universitaria Nacional (Reuna) (Chilean research and education network), the regional network of Latin America (RedCLARA), the Association of Universities for Research in Astronomy (AURA), Florida LambdaRail (FLR), Internet2, Telecom Italia Sparkle, and Angola Cables.

“The lighting of these waves is the culmination of a decade of work by the LSST international networking team and all the members of the AmLight Consortium. It is pioneering in both the public-private partnerships that enable it, as well as the novel flexible approach to submarine networking. It is an exciting time for Research and Education Networking,” said Prof. Chip Cox, Co-Principal Investigator of the AmLight-ExP project.

“The addition of the Express path to AmLight-ExP provides unprecedented bandwidth capacity to the research and education communities in the Americas,” said Dr. Julio Ibarra, the Principal Investigator of the AmLight-ExP project.

“AURA is immensely gratified to see the hard work of this collaboration with the AmLight consortium paying off with this major milestone. The unique promise of LSST science, a real time view of the dynamic night sky, depends on the high speed network from Chile to the USA, and this completes a major part of it” added Robert Blum, Director for LSST Operations.

According to Eduardo Grizendi, Director of the Directorate of Engineering and Operations of the Brazilian academic network RNP, “In addition to the benefits that Express Path brings to the entire academic community of the Americas, it should be noted that it also consolidates the successful partnership around AmLight Consortium built upon the Monet submarine cable system.”

“We are very proud and excited at being an active participant in this far-reaching scientific research project as it represents the real potential and value that our submarine cable networks can contribute to the knowledge and understanding not just of the world we live in, but the many worlds that lie beyond our solar system,” said Victor Costa, Angola Cables’ Regional Director, Brazil.

The Academic Network of Sao Paulo (ANSP) provides connectivity to more than fifty institutions, which are responsible for more than forty percent of Brazilian science production. The AmLight Consortium’s implementation of the AmLight Express network is a major milestone for the project, underpinned by ANSP’s partnership of more than 15 years with RNP and FIU.

About Angola Cables: Angola Cables is an IT solutions multinational focused on selling data infrastructure solutions, connectivity, and cloud services for IPs and ISPs requiring digital connections and services in the corporate sector. The company currently operates the SACS, Monet, and WACS cable systems and manages two data centers – AngoNAP Fortaleza in Brazil and AngoNAP Luanda in Angola. It also manages Angonix, one of Africa’s top five Internet Exchange Points. With its robust network, Angola Cables directly connects Africa, Europe, and the Americas, with established partnerships to connect to Asia (https://www.angolacables.co.ao/en/).

About ANSP: The Academic Network of São Paulo (ANSP) provides connectivity to the top R&E institutions, facilities, and researchers in the State of São Paulo, Brazil, including the University of São Paulo, the largest research university in South America. ANSP directly connects to AmLight in Miami. ANSP also provides connectivity to Kyatera, a 9-city dark-fiber-based optical network infrastructure linking 20 research institutions in the state and a number of special infrastructure projects like GridUNESP, one of the largest computational clusters in Latin America, supporting interdisciplinary grid-based science (www.ansp.br).

About AURA: The Association of Universities for Research in Astronomy (AURA) is a consortium of 40 US institutions and 4 international affiliates that operates world-class astronomical observatories. AURA’s role is to establish, nurture, and promote public observatories and facilities that advance innovative astronomical research. In addition, AURA is deeply committed to public and educational outreach, and to diversity throughout the astronomical and scientific workforce. AURA carries out its role through its astronomical facilities. (www.aura-astronomy.org)

About AMPATH: Florida International University’s Center for Internet Augmented Research and Assessment (CIARA), in the Division of IT, has developed an international, high-performance research connection point in Miami, Florida, called AMPATH (AMericasPATH; www.ampath.net). AMPATH extends participation in science and engineering research and education to underrepresented groups in Latin America and the Caribbean through the use of high-performance network connections. AMPATH is home to the Americas Lightpaths Express and Protect (AmLight-ExP) high-performance network links connecting Latin America to the U.S., funded by the National Science Foundation (NSF), award #OAC-1451018; and to the AtlanticWave-SDX (NSF award #OAC-1451024, 2015-2020, IRNC: RXP: AtlanticWave-Software Defined Exchange: A Distributed Intercontinental Experimental Software Defined Exchange (SDX)) (www.ciara.fiu.edu).

About FIU: Florida International University is an urban, multi-campus, public research university serving its students and the diverse population of South Florida. FIU is committed to high-quality teaching, state-of-the-art research and creative activity, and collaborative engagement with its local and global communities. Fostering a greater international understanding, FIU is a major international education center with a primary emphasis on creating greater mutual understanding among the Americas and throughout the world. FIU is Miami’s first and only public research university, offering bachelor’s, master’s, and doctoral degrees. Designated as a top-tier research institution, FIU emphasizes research as a major component in the university’s mission (http://www.fiu.edu).

About LSST: Large Synoptic Survey Telescope (LSST) project activities are supported through a partnership between the National Science Foundation (NSF) and the Department of Energy. NSF supports LSST through a Cooperative Agreement managed by the Association of Universities for Research in Astronomy (AURA). The Department of Energy funded effort is managed by the SLAC National Accelerator Laboratory (SLAC). Additional LSST funding comes from private donations, grants to universities, and in-kind support from Institutional Members of LSSTC (http://www.lsst.org/).

About RNP: The Brazilian Education and Research Network (RNP), qualified as a Social Organization (OS) by the Brazilian government, is supervised by the Ministry of Science, Technology and Innovation (MCTI), and is maintained through the inter-ministerial RNP program, which also includes the Ministries of Education (MEC), Health (MS) and Culture (MinC). The first Internet provider in Brazil with national coverage, RNP operates a high-performance nationwide network, with points of presence in all 26 states and the national capital, providing service to over 1200 distinct locations. RNP’s more than four million users are making use of an advanced network infrastructure for communication, computation and experimentation, which contributes to the integration of the national systems of Science, Technology and Innovation, Higher Education, Health and Culture (http://www.rnp.br/en).

Media Contacts:

For Angola Cables: Jonathas Ruiz, Public Relation from Angola Cables
Address: Rua Oscar Freire, 379 – 17th floor – 171 – São Paulo – Brazil
Tel: +55 (11) 38942433
Email: jonathas.ruiz@grupovirta.com.br

For FIU: Vasilka Chergarova, Research Coordinator
Center for Internet Augmented Research and Assessment (CIARA)
Florida International University
Miami, FL 33199
Tel: +1 305-348-2006
Email: vchergar@fiu.edu

For RNP: Leonie Gouveia, Communications Coordinator
Brazilian National Research and Educational Network (RNP)
Rio de Janeiro, Rua Lauro Müller, 116, sala 1103 – Botafogo
Rio de Janeiro – RJ – 22290-906
Tel: +55 21 2102-4193
Email: leonie.gouveia@rnp.br

SubOptic Conference 2019

The AmLight team presented the collaborative paper “Mitigating soft failures using network analytics and SDN to support distributed bandwidth-intensive scientific instruments over international networks” at the SubOptic Conference 2019. The conference was held in New Orleans, LA on 8-11 April 2019. A triennial event, SubOptic is the longest running and most comprehensive conference series in the world for the submarine fiber industry.

Abstract:  With the consolidation of high-speed networks and worldwide scientific deployments, new experiments are being conducted remotely. The control and data gathering of these bandwidth-intensive mission-critical instruments require a reliable network infrastructure capable of reacting in real-time to soft failures, such as packet loss. To address the mission-critical real-time instruments’ Service Level Agreement (SLA), streaming telemetry and data-driven analytics are required. In recent years, the industry has created many open consortiums and specifications, such as OpenConfig and Inband Network Telemetry (INT). As a result, we have new levels of interconnections, interoperation, and disaggregation allowing Software-Defined Networking (SDN) applications to use protocol agnostic, common APIs, Artificial Intelligence and Machine Learning to create reliable and adaptive networks. This paper aims to present the ongoing effort to create an adaptive network infrastructure capable of identifying and isolating soft failures in an automated approach to optimize bandwidth-intensive data transfers. Our approach leverages the most recent solutions offered by the optical and packet layers using SDN and network analytics.
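The paper builds on OpenConfig/INT streaming telemetry and SDN; purely as an illustrative sketch (the class name, thresholds, and sample values below are hypothetical, not taken from the paper), the core idea of flagging a soft failure from telemetry — sustained packet loss rather than a single noisy sample — might look like:

```python
from collections import deque

# Hypothetical sketch: flag a link as softly failed when its packet-loss
# ratio stays above a threshold for a sustained window of telemetry
# samples, so an SDN application could steer traffic around it.
LOSS_THRESHOLD = 0.001   # 0.1% loss; value is illustrative only
WINDOW = 5               # consecutive samples required

class SoftFailureDetector:
    def __init__(self, threshold=LOSS_THRESHOLD, window=WINDOW):
        self.threshold = threshold
        self.samples = deque(maxlen=window)

    def observe(self, tx_pkts, rx_pkts):
        """Feed one telemetry sample (packets sent vs. received on a link)."""
        loss = (tx_pkts - rx_pkts) / tx_pkts if tx_pkts else 0.0
        self.samples.append(loss)

    def soft_failure(self):
        """True only if every sample in the full window exceeds the threshold."""
        return (len(self.samples) == self.samples.maxlen
                and all(loss > self.threshold for loss in self.samples))

det = SoftFailureDetector()
for tx, rx in [(10000, 9999), (10000, 9950), (10000, 9940),
               (10000, 9930), (10000, 9945), (10000, 9935)]:
    det.observe(tx, rx)
print("soft failure detected:", det.soft_failure())
```

Requiring a full window of bad samples is what separates a soft failure from transient noise; the real system additionally correlates optical-layer and packet-layer telemetry, which this sketch omits.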

Authors: Jeronimo Bezerra, Julio Ibarra (Florida International University); David Boertjes, Franco Santillo, Lance Williford (CIENA); Heidi Morgan (University of Southern California); Chip Cox (Vanderbilt University); and Luiz Lopez (University of Sao Paulo)

For more details about SubOptic Conference 2019 click here.
To download a pdf copy of the publication click here.


Global 100G session at I2 Global Summit 2019

Highlights of the AmLight project were presented in the Global 100G session at the Internet2 Global Summit on March 7, along with projects from ANA-100G, GÉANT, RedCLARA/RNP, and SINET. The session highlighted breakthrough initiatives enabling the first 100G research and education networks around the world, pushing for greater connectivity around the globe. Efforts such as the Global Network Architecture (GNA), GÉANT and RedCLARA’s BELLA project directly connecting Europe and Latin America, and Japan’s SINET 100G project are enabling researchers across the globe to realize 100G connectivity.
Dr. Julio Ibarra presented the current status of the regional network infrastructure and the activities planned for 2019 to continue building the Express backbone and enhancing the resilience of the AmLight ExP project. Achieving these goals will facilitate effective peering among academic networks and communities of interest, in response to the network requirements of research communities, through cooperation and collaboration.

For the video presentation, click here.
For Dr. Julio Ibarra’s presentation, click here.


AmLight Team at Supercomputing Conference (SC18)

The AmLight Team participated in multiple Network Research Exhibition (NRE) demonstrations at the International Conference for High-Performance Computing, Networking, Storage, and Analysis (SC18), held in Dallas, TX. As part of the Innovating the Network for Data-Intensive Science (INDIS) workshop, NRE participants are invited to share the results of demonstrations and experiments from the preceding year’s conference that display innovation in emerging network hardware, protocols, and advanced network-intensive scientific applications.

NRE-16: Global Petascale to Exascale Science Workflows Accelerated by Next Generation SDN Architectures and Applications

We demonstrated several major advances in software-defined and Terabit/sec networks, intelligent global operations and monitoring systems, workflow optimization methodologies with real-time analytics, and state-of-the-art long-distance data transfer methods, tools, and server designs, to meet the challenges faced by leading-edge data-intensive experimental programs in high energy physics, astrophysics, and climate science, including the Large Hadron Collider (LHC), the Large Synoptic Survey Telescope (LSST), the Linac Coherent Light Source (LCLS-II), the Earth System Grid Federation, and others. Several of the SC18 demonstrations included a fundamentally new concept of “consistent network operations,” in which stable, load-balanced, high-throughput workflows cross optimally chosen network paths, up to preset high-water marks to accommodate other traffic, provided by autonomous site-resident services dynamically interacting with network-resident services in response to demands from the science programs’ principal data distribution and management systems. This was empowered by end-to-end SDN methods extending all the way to autoconfigured Data Transfer Nodes (DTNs), including intent-based networking APIs combined with transfer applications such as Caltech’s open-source, TCP-based FDT, which has been shown to match 100G long-distance paths at wire speed in production networks. During the demos, data flows were steered across regional, continental, and transoceanic wide area networks through the orchestration software, controllers, and automated virtualization software stacks developed in the SENSE, PRP, AmLight, Kytos, and other collaborative projects. The DTNs used the latest high-throughput SSDs and flow-control methods at the edges, such as FireQoS and/or Open vSwitch, complemented by NVMe-over-fabric installations in some locations.
Download the final NRE-16 demo submission (PDF)
For Caltech press release click here.
For Caltech SC18 demonstration details click here.
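The NRE-16 demonstrations used FireQoS and Open vSwitch for edge flow control; purely as a standalone illustration (not the actual implementation, and all rates here are invented for the example), a token bucket captures the “preset high-water mark” idea of capping a science flow’s rate so competing traffic is accommodated:

```python
# Token-bucket pacer illustrating a preset "high-water mark": a flow is
# capped at a configured rate so other traffic on the path is never starved.
# (Illustrative only; the SC18 demos used FireQoS / Open vSwitch.)
class TokenBucket:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps        # refill rate = the high-water mark
        self.capacity = burst_bits  # maximum burst, in bits
        self.tokens = burst_bits
        self.last = 0.0

    def allow(self, now, packet_bits):
        """Return True if a packet of packet_bits may be sent at time now."""
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True   # under the high-water mark: send now
        return False      # at the cap: defer, leaving headroom for others

# Offer ~120 Gbps (12,000-bit packets every 100 ns) into an 80 Gbps cap:
bucket = TokenBucket(rate_bps=80e9, burst_bits=1e6)
sent = sum(bucket.allow(t * 1e-7, 12000) for t in range(10000))
print(f"packets admitted: {sent} of 10000")
```

Once the burst allowance is spent, the bucket admits only the fraction of packets the configured rate allows, which is the behavior a high-water mark imposes on a greedy transfer.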

NRE-17: Large Synoptic Survey Telescope (LSST) Real Time Low Latency Transfers for Scientific Processing Demonstrations

At SC18 in Dallas, Texas, we experimented with data transfer rates using 100G FIONA Data Transfer Nodes (DTNs) in Chile and Illinois. The demos aimed to achieve three goals. First, we demonstrated real-time, low-latency transfers for scientific processing of multi-gigabyte images from the LSST base station site in La Serena, Chile, flowing over the REUNA Chilean National Research & Education Network (NREN), as well as ANSP and RNP Brazilian national circuits and the AmLight-ExP Atlantic and Pacific Ring, through AMPATH2 to Starlight and NCSA. Second, we simulated operational and data quality traffic to SLAC, Tucson, and other sites, including the Dallas show floor. Third, we stress-tested the AmLight ExP network, simulating the LSST annual multi-petabyte Data Release from NCSA to La Serena at rates consistent with those required for LSST operations.
Download the final NRE-17 demo submission (PDF).
See the official LSST press release online.

NRE-18: Americas Lightpaths Express and Protect Enhances Infrastructure for Research and Education

Americas Lightpaths Express and Protect (AmLight ExP) enables research and education amongst the people of the Americas through the operation of production infrastructure for communication and collaboration between the U.S. and Western Hemisphere science and engineering research and education communities. AmLight ExP supports a hybrid network strategy that combines optical spectrum (Express) and leased capacity (Protect) that provides a reliable, leading-edge diverse network infrastructure for research and education.

AmLight-ExP supported the LSST and LHC-related use cases in association with high-throughput, low-latency experiments, and demonstrations of auto-recovery from network events, using its 100G ring network that interconnects the research and education communities in the U.S. and South America. These use cases and demonstrations highlighted AmLight-ExP and its multifaceted roles for networking in support of the collaborative work by many teams in the U.S. and Latin America. In addition, the demonstrations featured the research and education networks participating in AmLight-ExP, referred to as the AmLight Consortium[1].

As part of this support, during the course of the LSST and AmLight ExP SC18 demonstrations, Dark Energy Camera (DECam) public data from the AURA site in Chile arrived via AmLight at both the KISTI and Caltech booths in Dallas, where it was mirrored and carried across SCinet, Starlight, KRLight and KREONet2 to DTNs at KISTI and KASI in Korea. Throughputs of 58 Gbps were achieved across the 60 Gbps path from the telescope site to the KISTI booth and a remarkable 99.7 Gbps on the 100 Gbps path between Dallas and Daejeon.
Download the final NRE-18 demo submission (PDF)
More details about SC18 can be found here.

[1] The AmLight Consortium is a group of not-for-profit universities, state, national and regional research and education networks including the AmLight ExP project at Florida International University, AURA, LSST, RNP, ANSP, Clara, REUNA, FLR, Telecom Italia Sparkle, and Internet2.


LSST 100 Gbps Network Demonstration at Supercomputing Conference 2018

November 20, 2018 – The LSST Network Engineering Team (NET) had a strong presence at the Supercomputing 2018 Conference (SC18) in Dallas, TX, last week, including a successful demonstration of the data transfer capabilities of the fiber optic networks that will be used during LSST operations. Digital data were transferred from the Base Site in La Serena, Chile, to the LSST Data Facility at the National Center for Supercomputing Applications (NCSA) in Champaign, IL. During the data transfer demonstration, a peak rate of 100 Gigabits/second (Gb/s) was achieved for short periods, and a sustained rate of 80 Gb/s was achieved over a three-hour period, exceeding the test target. This test was run over links provisioned by several networking organizations: REUNA from La Serena to Santiago, FIU/AmLight from Santiago to Miami, SCinet from Miami to Chicago (Starlight), and NCSA from Chicago to Champaign. SCinet links provided by CenturyLink and Internet2 were used to transfer the data from Miami to Chicago because LSST 100 Gb/s links will not be available in that path until FY20. All of the other links were those that will be used by LSST during operations.

Data Transfer Nodes (DTNs) configured in La Serena and Champaign with nuttcp (a network performance measurement tool) generated a sustained memory-to-memory data rate of over 80 Gb/s over a period of three hours. Simultaneously, the DTNs, using the Fermilab Multicore-Aware Data Transfer Middleware (MDTM) software, achieved a peak of 36 Gb/s transferring 200 Gigabytes of DECam public data (FITS files) provided by the National Optical Astronomy Observatory (NOAO). Note that in LSST operations, there will be over 20 DTNs (a.k.a. archiver/forwarders) simultaneously sending data, so each one will require far less than 36 Gb/s. In addition, on the Champaign end the files were ingested into a GPFS shared file system, and a Jupyter Notebook running an application provided by LSST Data Management was used to visualize the files. Finally, an additional test transfer from Champaign to La Serena is being conducted and has so far achieved a peak of 40 Gb/s, sufficient for the annual transfer of LSST Data Releases to Chile.
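As a rough back-of-envelope check on the figures above (the helper function and unit constants are illustrative, not part of the demonstration software), the quoted rates imply transfer times and per-node requirements like these:

```python
# Back-of-envelope arithmetic for the SC18 DTN demonstration figures.
# Rates are in bits per second; sizes in (decimal) bytes.

GB = 1e9     # decimal gigabyte
Gbps = 1e9   # gigabit per second

def transfer_seconds(size_bytes: float, rate_bps: float) -> float:
    """Time to move size_bytes at a sustained rate of rate_bps."""
    return size_bytes * 8 / rate_bps

# 200 GB of DECam FITS files at the 36 Gb/s MDTM peak: under a minute.
decam_time = transfer_seconds(200 * GB, 36 * Gbps)

# Volume moved by the three-hour nuttcp run sustaining 80 Gb/s: ~108 TB.
nuttcp_bytes = 80 * Gbps * 3 * 3600 / 8

# With 20 archiver/forwarder DTNs sharing an 80 Gb/s aggregate,
# each node needs only about 4 Gb/s, far below the 36 Gb/s single-node peak.
per_dtn = 80 * Gbps / 20

print(round(decam_time, 1), nuttcp_bytes / 1e12, per_dtn / 1e9)
```

This is why a single 36 Gb/s DTN-to-DTN result comfortably covers the operational case: the load is spread over many archiver/forwarders.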

Instrumentation in the DTNs and links, together with Grafana software, provided a real-time web display of network performance during the demonstration. This was monitored live from the NCSA booth at the Supercomputing 2018 Conference. A number of conference attendees witnessed the demonstration and presentation, and participated in a question-and-answer session.

According to LSST NET Lead Jeff Kantor, “This demonstration shows not only that we have continuity and performance from the network point of view, but also that all of the partners acted as a very well-coordinated engineering team for LSST.”

Congratulations to the LSST NET SC18 Demonstration Team:

Albert Astudillo (REUNA)
Jeronimo Bezerra (FIU/AmLight)
Julio Ibarra (FIU/AmLight)
Sandra Jaque (REUNA)
Matt Kollross (UIUC/NCSA)
Ron Lambert (LSST/AURA)
Sean McManus (NOAO)
Wil O’Mullane (LSST/AURA)
Rodrigo Pescador (RNP)
Andres Villalobos (LSST/AURA)
Adil Zahir (FIU/AmLight)

Additional support was provided by a number of other people within these organizations. We are particularly grateful to SCinet, Starlight, and Fermilab for enabling this demonstration.

Original LSST article published here


Lighting up the LSST Fiber Optic Network: From Summit to Base to Archive

 

April 11, 2018 – The LSST Network Engineering Team is pleased to announce the first successful transfer of digital data over LSST/AURA 100 gigabit per second fiber optic networks from the Summit Site on Cerro Pachón, Chile to the Base Site in La Serena, Chile and on to the Archive Site at NCSA in Champaign, IL. This event took place in December 2017 and demonstrated not only performance and continuity across all hardware segments of the network, but a well-coordinated effort by multiple international engineering teams in support of LSST.

This challenge, driven by the “astronomical” needs of LSST to transport data from Cerro Pachón to NCSA for processing, and to distribute data from Cerro Pachón to the rest of Chile and the world, was the motivation for this international collaboration. AURA coordinated the project in conjunction with REUNA in Chile, and in the United States with FIU/AmLight and NCSA. At the international level, the joint work with FIU/AmLight demonstrated transcontinental infrastructures that ensure the data flow to the United States is coherent and reliable.

This scientific and technological milestone marks the first stage of a project at the Chilean national level, part of REUNA’s 2018-2021 Strategic Plan, that will provide a platform for the collaborative development of science and education, suitable for the transmission and analysis of real-time data from the Universe. It will have an impact on multiple research areas, such as computer science, mathematics, and physics, enabling them in the new era of Big Data science.

In Chile, the 800 km network between Cerro Pachón, La Serena, and Santiago has an initial capacity of approximately 10 Tbps (96 optical channels of 100 Gbps each), with substantial room for growth as transmission technology develops. For example, there are already prototypes of 400 Gbps channels, which would quadruple the capacity.
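The capacity figures quoted above follow directly from the channel counts (a simple illustrative calculation; the variable names are ours, not the project's):

```python
# Capacity arithmetic for the Cerro Pachón–Santiago DWDM system,
# using the channel counts quoted in the article.

channels = 96
channel_rate_gbps = 100

# 96 channels at 100 Gbps each: 9.6 Tbps, i.e. roughly the quoted 10 Tbps.
initial_tbps = channels * channel_rate_gbps / 1000

# The same 96 channels at a prototype 400 Gbps rate would quadruple that.
upgraded_tbps = channels * 400 / 1000

print(initial_tbps, upgraded_tbps)
```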

In the first data transfer carried over this new network, a set of six 10 Gbps network interface cards in data transfer nodes (DTNs) configured with the iPerf3 software generated a sustained data rate of approximately 48 gigabits per second over a 24-hour period. This exceeded the test objective of 40 gigabits per second.
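To put that 24-hour run in perspective (an illustrative sanity check using only the figures quoted above; the names are ours):

```python
# Sanity check on the first end-to-end test: six 10 Gbps NICs driven by
# iPerf3 sustained ~48 Gb/s for 24 hours, against a 40 Gb/s objective.

nic_count = 6
nic_rate_gbps = 10
sustained_gbps = 48
target_gbps = 40

# The sustained rate met the objective with margin.
assert sustained_gbps > target_gbps

# 48 of an aggregate 60 Gb/s line rate is 80% utilization, sustained.
utilization = sustained_gbps / (nic_count * nic_rate_gbps)

# At that rate, a full day moves roughly half a petabyte.
day_volume_tb = sustained_gbps * 1e9 * 86400 / 8 / 1e12

print(utilization, round(day_volume_tb))
```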

It should be noted that this project began on December 5th, 2014, with the public announcement by AURA, REUNA, and Telefónica of the implementation of the digital infrastructure between Santiago and Cerro Pachón. The Chilean Ministry of Economy supported the announcement.

Currently, professionals from AURA (Chile and the USA), REUNA (Chile), Florida International University (USA), AmLight (USA), RNP (Brazil), and UI NCSA (USA) participate in the LSST Network Engineering Team (NET), which provides the means to engineer end-to-end network performance across multiple network domains and providers.

It is also important to mention the role played by private companies in the development of these infrastructures.  In the case of Chile, Telefónica has been a strategic partner with a vision of collaboration with the National Academic Network in the technological development of the country. In the case of the USA, Internet2 and Florida LambdaRail have been long-term collaborators, supporting FIU/AmLight and the astronomy community in Chile.

Original article published here: https://project.lsst.org/lighting-lsst-fiber-optic-network-summit-base-archive