R

sparklyr 1.4: Weighted Sampling, Tidyr Verbs, Robust Scaler, RAPIDS, and more

sparklyr 1.4 is now available! This release comes with delightful new features such as weighted sampling and tidyr verbs support for Spark dataframes, a robust scaler for standardizing data based on the median and interquartile range, a spark_connect interface for the RAPIDS GPU acceleration plugin, as well as a number of dplyr-related improvements.
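
Below is a minimal sketch of the first two of these features, assuming a local Spark 3.0 connection (Spark ML's RobustScaler requires Spark 3.0+); the toy data and column names are illustrative:

```r
library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local", version = "3.0.0")

sdf <- copy_to(sc, data.frame(x = rnorm(100), weight = runif(100)))

# Weighted sampling: draw 20 rows with probability proportional to `weight`
sampled <- sdf %>%
  sdf_weighted_sample(weight_col = "weight", k = 20, replacement = FALSE)

# Robust scaler: center by the median, scale by the interquartile range.
# Like other Spark ML scalers it operates on a vector column, hence the
# assembler step first.
scaled <- sdf %>%
  ft_vector_assembler(input_cols = "x", output_col = "features") %>%
  ft_robust_scaler(input_col = "features", output_col = "scaled")
```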

torch time series, take three: Sequence-to-sequence prediction

In our overview of techniques for time-series forecasting, we move on to sequence-to-sequence models. Architectures in this family are commonly used in natural language processing (NLP) tasks, such as machine translation. With NLP, however, significant pre-processing is required before proceeding to model definition and training. By staying with our familiar numerical series, we can fully …
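
Purely to make the idea concrete (a bare-bones sketch with hypothetical layer sizes and window lengths, not the post's actual architecture), a GRU-based encoder-decoder in torch could look like this:

```r
library(torch)

seq2seq <- nn_module(
  "seq2seq",
  initialize = function(input_size = 1, hidden_size = 32, output_len = 7) {
    self$output_len <- output_len
    self$encoder <- nn_gru(input_size, hidden_size, batch_first = TRUE)
    self$decoder <- nn_gru(input_size, hidden_size, batch_first = TRUE)
    self$head <- nn_linear(hidden_size, 1)
  },
  forward = function(x) {
    # encode the input window, keeping only the final hidden state
    state <- self$encoder(x)[[2]]
    # decode step by step, feeding each prediction back in as the next input
    input <- x[, dim(x)[2], ]$unsqueeze(2)  # last observed step: batch x 1 x 1
    outputs <- list()
    for (t in seq_len(self$output_len)) {
      result <- self$decoder(input, state)
      state <- result[[2]]
      pred <- self$head(result[[1]])
      outputs[[t]] <- pred
      input <- pred
    }
    torch_cat(outputs, dim = 2)
  }
)

model <- seq2seq()
x <- torch_randn(16, 14, 1)  # batch of 16 windows, 14 time steps, 1 feature
y_hat <- model(x)            # predictions: 16 x 7 x 1
```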

sparklyr 1.2: Foreach, Spark 3.0 and Databricks Connect

A new sparklyr release is now available. sparklyr 1.2 features new functionality such as support for Databricks Connect, a Spark backend for the ‘foreach’ package, and interoperability improvements for working with the Spark 3.0 preview, as well as a number of bug fixes and improvements addressing user-visible pain points.
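
For instance, once the Spark backend is registered via registerDoSpark(), %dopar% loops run as Spark jobs (a minimal sketch; the toy computation stands in for real work):

```r
library(sparklyr)
library(foreach)

sc <- spark_connect(master = "local")

# register Spark as the parallel backend for the foreach package
registerDoSpark(sc)

# each iteration of the loop now executes on Spark
res <- foreach(i = 1:10, .combine = c) %dopar% {
  i * i
}
```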

Towards privacy: Encrypted deep learning with Syft and Keras

Deep learning need not be irreconcilable with privacy protection: federated learning enables on-device, distributed model training; encryption keeps model and gradient updates private; and differential privacy prevents the training data from leaking. As of today, private and secure deep learning is an emerging technology. In this post, we introduce Syft, an open-source framework that integrates with …
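
Syft's own API is beyond the scope of a teaser, but the core trick behind encrypted computation, additive secret sharing as used in SPDZ-style protocols, fits in a few lines of plain R (a toy illustration, not Syft's API):

```r
# A value is split into random shares that reveal nothing individually,
# yet arithmetic on shares decodes to arithmetic on the values.
Q <- 2147483647  # a large prime modulus; values are assumed to be far below Q

share <- function(x, n_parties = 3) {
  shares <- sample.int(Q, n_parties - 1) - 1
  c(shares, (x - sum(shares)) %% Q)
}

reveal <- function(shares) sum(shares) %% Q

a <- share(123)
b <- share(456)

# each party adds its own shares locally; no party ever sees 123 or 456
reveal((a + b) %% Q)  # 579
```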

Hacking deep learning: model inversion attack by example

Compared to other applications, deep learning models might not seem like likely victims of privacy attacks. However, methods exist to determine whether an entity was used in the training set (an adversarial attack called membership inference), and techniques subsumed under “model inversion” make it possible to reconstruct raw input data given just model output (and sometimes, …
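
As a rough illustration of the inversion idea (not the post's actual code; the classifier here is a hypothetical stand-in), one can start from a blank input and run gradient ascent on the score of a chosen target class:

```r
library(torch)

# stand-in for the attacked, frozen classifier
model <- nn_sequential(
  nn_linear(64, 32), nn_relu(),
  nn_linear(32, 10)
)
model$eval()

target_class <- 3
x <- torch_zeros(1, 64, requires_grad = TRUE)  # the candidate reconstruction
optimizer <- optim_adam(list(x), lr = 0.1)

for (step in 1:200) {
  optimizer$zero_grad()
  logits <- model(x)
  loss <- -logits[1, target_class]  # maximize the target class logit
  loss$backward()
  optimizer$step()  # only x is updated; the model itself stays frozen
}
# x now approximates an input the model strongly associates with the class
```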
