For some time now, you can also rely solely on the data.table package and its IDate class plus associated functions (check ?as.IDate). So there is no need to additionally install lubridate.
require(data.table)
some_date <- c("01/02/1979", "03/04/1980")
month(as.IDate(some_date, '%d/%m/%Y')) # all data.table functions
Add the active: false option (see the documentation).
$("#accordion").accordion({ header: "h3", collapsible: true, active: false });
All of these answers seem to assume that the user is generating the bad XML, rather than receiving it from gSOAP, which should know better!
To have global constants in my apps, this is what I do in a separate Swift file:
import Foundation
struct Config {
static let baseURL = "https://api.com"
struct APIKeys {
static let token = "token"
static let user = "user"
}
struct Notifications {
static let awareUser = "aware_user"
}
}
It's easy to use, and it can be called from anywhere like this:
print(Config.Notifications.awareUser)
You could simply use target="_blank"
on the form.
<form action="action.php" method="post" target="_blank">
<input type="hidden" name="something" value="some value">
</form>
Add hidden inputs in the way you prefer, and then simply submit the form with JS.
You put the declaration in a header file, e.g.
extern int my_global;
In one of your .c files you define it at global scope.
int my_global;
Every .c file that wants access to my_global
includes the header file with the extern
in.
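A minimal sketch of the three pieces (the file names are hypothetical):
/* globals.h */
extern int my_global;              /* declaration, visible to every file that includes this header */
/* globals.c */
int my_global = 42;                /* the one and only definition */
/* main.c */
#include <stdio.h>
#include "globals.h"
int main(void)
{
    printf("%d\n", my_global);     /* reads the global defined in globals.c */
    return 0;
}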
In Windows Server 2012, you might run into this issue even after installing ASP.NET.
Check for the "HTTP Activation" feature. This feature is also listed under Web Services.
Make sure you add it, and everything should work.
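If you prefer to add it from an elevated PowerShell prompt, something like the following should work; the exact feature names are my assumption for Server 2012, so list them first to confirm:
# List the activation-related features to confirm the exact names on your server
Get-WindowsFeature *Activation*
# Install HTTP Activation for .NET 4.5 (assumed feature name on Server 2012)
Install-WindowsFeature NET-WCF-HTTP-Activation45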
I would set up a Subversion repository. By doing it this way, individual developers can choose whether to use Subversion clients or Git clients (with git-svn
). Using git-svn
doesn't give you all the benefits of a full Git solution, but it does give individual developers a great deal of control over their own workflow.
I believe it will be a relatively short time before Git works just as well on Windows as it does on Unix and Mac OS X (since you asked).
Subversion has excellent tools for Windows, such as TortoiseSVN for Explorer integration and AnkhSVN for Visual Studio integration.
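For example, a rough git-svn workflow might look like this (the repository URL is hypothetical; --stdlayout assumes the usual trunk/branches/tags layout):
git svn clone https://svn.example.com/repo --stdlayout my-project
cd my-project
# commit locally with Git as usual, then:
git svn rebase     # pull in new Subversion revisions
git svn dcommit    # push local commits back to Subversion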
Double-check that the foreign key columns have exactly the same type as the fields they reference in this table. For example, both should be INTEGER(10) or VARCHAR(8), down to the same length.
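For example (hypothetical MySQL tables), the referencing column must match the referenced column in type, length, and signedness:
CREATE TABLE parent (
    id INT(10) UNSIGNED NOT NULL PRIMARY KEY
);
CREATE TABLE child (
    parent_id INT(10) UNSIGNED NOT NULL,   -- same type, length and signedness as parent.id
    FOREIGN KEY (parent_id) REFERENCES parent(id)
);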
int i= Array.IndexOf(temp1, temp1.Where(x=>x.Contains("abc")).FirstOrDefault());
Class re-declaration is likely the problem. Check for a duplicate class and rebuild.
This answer is a summary of comments; but it really deserves its own answer.
The accepted answer (by @BjarkeCK) works, but as written, there is a maximum allowable page height of about 120 inches — roughly the height of 11 normal sized pages. So this is not a perfect solution.
However, there is a hack. You have to edit the markup of the page-sizer settings window in your browser and either increase or delete the max attribute on the page-height input.
To access the markup you need to edit, position your cursor inside the custom height field, right-click, then choose Inspect Element.
Note that you also have to delete all the page breaks in your original document; otherwise no data will render after the first one.
UPDATE:
Modified the code based on this answer to get rid of obsolete methods.
You can use the Security namespace to check this:
public void ExportToFile(string filename)
{
var permissionSet = new PermissionSet(PermissionState.None);
var writePermission = new FileIOPermission(FileIOPermissionAccess.Write, filename);
permissionSet.AddPermission(writePermission);
if (permissionSet.IsSubsetOf(AppDomain.CurrentDomain.PermissionSet))
{
using (FileStream fstream = new FileStream(filename, FileMode.Create))
using (TextWriter writer = new StreamWriter(fstream))
{
// try catch block for write permissions
writer.WriteLine("sometext");
}
}
else
{
//perform some recovery action here
}
}
As far as getting those permissions goes, you are going to have to ask the user to grant them somehow. If you could do this programmatically, then we would all be in trouble ;)
OK, the answer is actually quite simple: when the model value is not one of the options Angular recognizes, it inserts an empty placeholder option.
What you are doing wrong is this: ng-options reads an array of objects, say [{ id: 10, name: test }, { id: 11, name: test2 }], right?
Your model value needs to be equal to one of those objects. So if you want the selected value to be 10, you need to set your model to a value like { id: 10, name: test }; then Angular will NOT create that extra empty option.
Hope this helps everybody understand; I had a rough time figuring it out :)
You need to write() the read() data into the new file:
#include <fcntl.h>
#include <unistd.h>
char buffer[50];
ssize_t nrd;
int fd;
int fd1;
fd = open(aa[1], O_RDONLY);
fd1 = open(aa[2], O_CREAT | O_WRONLY, S_IRUSR | S_IWUSR);
while ((nrd = read(fd, buffer, sizeof(buffer))) > 0) {
    write(fd1, buffer, nrd);
}
close(fd);
close(fd1);
Update: added the proper opens...
Btw, O_CREAT should be OR'd with the access mode (O_CREAT | O_WRONLY). You were actually opening too many file handles; just do each open once.
I had a similar problem. As I got a Character from my XML child I had to convert it first to a String (or Integer, if you expect one). The following shows how I solved the problem.
foreach($xml->children() as $newInstr){
$iInstrument = new Instrument($newInstr['id'],$newInstr->Naam,$newInstr->Key);
$arrInstruments->offsetSet((String)$iInstrument->getID(), $iInstrument);
}
There are two differences:
We can use an Iterator to traverse a Set or a List, as well as Map types of objects. A ListIterator, however, can only be used to traverse a List, not a Set.
That is, we can get an Iterator object from a Set or a List:
Iterator iterator = set.iterator();
Iterator iterator = list.iterator();
Using an Iterator, we can retrieve the elements from a collection in the forward direction only.
Methods in Iterator:
hasNext()
next()
remove()
But we can get a ListIterator object only from the List interface:
ListIterator listIterator = list.listIterator();
That is, we can't get a ListIterator object from the Set interface.
A ListIterator allows you to traverse in either direction (both forward and backward), so it has two more methods than Iterator: hasPrevious() and previous(). Also, we can get the indexes of the next or previous elements (using nextIndex() and previousIndex() respectively).
Methods in ListIterator: all of Iterator's methods, plus hasPrevious(), previous(), nextIndex(), and previousIndex().
Reference: What is the difference between Iterator and ListIterator?
For any object array with headers and data, here is a working JSFiddle:
https://jsfiddle.net/AmrendraKumar/9ac75Lg0/2/
<table id="myTable" border='1|1'></table>
<script>
const userObjectArray = [{
name: "Ajay",
age: 27,
height: 5.10,
address: "Bangalore"
}, {
name: "Vijay",
age: 24,
height: 5.10,
address: "Bangalore"
}, {
name: "Dinesh",
age: 27,
height: 5.10,
address: "Bangalore"
}];
const headers = Object.keys(userObjectArray[0]);
var tr1 = document.createElement('tr');
var htmlHeaderStr = '';
for (let i = 0; i < headers.length; i++) {
htmlHeaderStr += "<th>" + headers[i] + "</th>"
}
tr1.innerHTML = htmlHeaderStr;
document.getElementById('myTable').appendChild(tr1);
for (var j = 0; j < userObjectArray.length; j++) {
var tr = document.createElement('tr');
var htmlDataString = '';
for (var k = 0; k < headers.length; k++) {
htmlDataString += "<td>" + userObjectArray[j][headers[k]] + "</td>"
}
tr.innerHTML = htmlDataString;
document.getElementById('myTable').appendChild(tr);
}
</script>
Your if statements are checking for int values. raw_input
returns a string. Change the following line:
tSizeAns = raw_input()
to
tSizeAns = int(raw_input())
I think the correct solution with support library 21 is the following
// action_bar is def resource of appcompat;
// if you have not provided your own toolbar I mean
Toolbar toolbar = (Toolbar) findViewById(R.id.action_bar);
if (toolbar != null) {
// change home icon if you wish
toolbar.setLogo(this.getResValues().homeIconDrawable());
toolbar.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
//catch here title and home icon click
}
});
}
Adding to @Vityata's answer, below is the function I use to convert a row/column vector into a 1D array:
Function convertVecToArr(ByVal rng As Range) As Variant
'convert two dimension array into a one dimension array
Dim arr() As Variant, slicedArr() As Variant
arr = rng.value 'arr = rng works too (https://bettersolutions.com/excel/cells-ranges/vba-working-with-arrays.htm)
If UBound(arr, 1) > UBound(arr, 2) Then
slicedArr = Application.WorksheetFunction.Transpose(arr)
Else
slicedArr = Application.WorksheetFunction.index(arr, 1, 0) 'If you set row_num or column_num to 0 (zero), Index returns the array of values for the entire column or row, respectively._
'To use values returned as an array, enter the Index function as an array formula in a horizontal range of cells for a row,_
'and in a vertical range of cells for a column.
'https://usefulgyaan.wordpress.com/2013/06/12/vba-trick-of-the-week-slicing-an-array-without-loop-application-index/
End If
convertVecToArr = slicedArr
End Function
One way is to convert your array to an object and use it in scope (simulation of an array). This way has the benefit of maintaining the template.
$scope.telephone = {};
for (var i = 0, l = $scope.phones.length; i < l; i++) {
$scope.telephone[i.toString()] = $scope.phones[i];
}
<input type="text" ng-model="telephone[0.toString()]" />
<input type="text" ng-model="telephone[1.toString()]" />
and on save, change it back.
$scope.phones = [];
for (var i in $scope.telephone) {
$scope.phones[parseInt(i)] = $scope.telephone[i];
}
def _assertNotRaises(self, exception, obj, attr):
try:
result = getattr(obj, attr)
if hasattr(result, '__call__'):
result()
except Exception as e:
if isinstance(e, exception):
raise AssertionError('{}.{} raises {}.'.format(obj, attr, exception))
This could be modified if you need to accept parameters.
Call it like:
self._assertNotRaises(IndexError, array, 'sort')
Based on the above post, I tried this and it worked fine. I wanted to use the values of Map B as keys for Map A:
<c:if test="${not empty activityCodeMap and not empty activityDescMap}">
<c:forEach var="valueMap" items="${auditMap}">
<tr>
<td class="activity_white"><c:out value="${activityCodeMap[valueMap.value.activityCode]}"/></td>
<td class="activity_white"><c:out value="${activityDescMap[valueMap.value.activityDescCode]}"/></td>
<td class="activity_white">${valueMap.value.dateTime}</td>
</tr>
</c:forEach>
</c:if>
This works for me:
# Convert image to bytes
import io
import PIL.Image as Image
pil_im = Image.fromarray(image)
b = io.BytesIO()
pil_im.save(b, 'jpeg')
im_bytes = b.getvalue()
return im_bytes
Of course, it never fails: I found the solution about a minute after posting the above question. The solution, for those that may have the same issue:
ContextWrapper.getFilesDir()
Found here.
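A minimal sketch (the file name is just an example), called from inside an Activity or Service, both of which extend ContextWrapper:
import java.io.File;
File internalDir = getFilesDir();                    // app-private internal storage directory
File output = new File(internalDir, "example.txt");  // hypothetical file inside it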
When you add an object to $stateProvider.state
that object is then passed with the state. So you can add additional properties which you can read later on when needed.
Example route configuration
$stateProvider
.state('public', {
abstract: true,
module: 'public'
})
.state('public.login', {
url: '/login',
module: 'public'
})
.state('tool', {
abstract: true,
module: 'private'
})
.state('tool.suggestions', {
url: '/suggestions',
module: 'private'
});
The $stateChangeStart
event gives you acces to the toState
and fromState
objects. These state objects will contain the configuration properties.
Example check for the custom module property
$rootScope.$on('$stateChangeStart', function(e, toState, toParams, fromState, fromParams) {
if (toState.module === 'private' && !$cookies.Session) {
// If logged out and transitioning to a logged in page:
e.preventDefault();
$state.go('public.login');
} else if (toState.module === 'public' && $cookies.Session) {
// If logged in and transitioning to a logged out page:
e.preventDefault();
$state.go('tool.suggestions');
};
});
I didn't change the logic of the cookies because I think that is out of scope for your question.
You can create a Helper to get you this to work more modular.
Value publicStates
myApp.value('publicStates', function(){
return {
module: 'public',
routes: [{
name: 'login',
config: {
url: '/login'
}
}]
};
});
Value privateStates
myApp.value('privateStates', function(){
return {
module: 'private',
routes: [{
name: 'suggestions',
config: {
url: '/suggestions'
}
}]
};
});
The Helper
myApp.provider('stateshelperConfig', function () {
this.config = {
// These are the properties we need to set
// $stateProvider: undefined
process: function (stateConfigs){
var module = stateConfigs.module;
$stateProvider = this.$stateProvider;
$stateProvider.state(module, {
abstract: true,
module: module
});
angular.forEach(stateConfigs.routes, function (route){
route.config.module = module;
$stateProvider.state(module + '.' + route.name, route.config);
});
}
};
this.$get = function () {
return {
config: this.config
};
};
});
Now you can use the helper to add the state configuration to your state configuration.
myApp.config(['$stateProvider', '$urlRouterProvider',
'stateshelperConfigProvider', 'publicStates', 'privateStates',
function ($stateProvider, $urlRouterProvider, helper, publicStates, privateStates) {
helper.config.$stateProvider = $stateProvider;
helper.process(publicStates);
helper.process(privateStates);
}]);
This way you can abstract the repeated code, and come up with a more modular solution.
Note: the code above isn't tested
You should be able to do something like this:
http://maps.google.com/maps?q=24.197611,120.780512
Some more info on the query parameters available at this location
Here's another link to an SO thread
Make a bypass API in server.js. This works for me.
app.post('/by-pass-api',function(req, response){
const url = req.body.url;
console.log("calling url", url);
request.get(
url,
(error, res, body) => {
if (error) {
console.error(error)
return response.status(200).json({'content': "error"})
}
return response.status(200).json(JSON.parse(body))
},
)
})
And call it using axios or fetch like this:
const options = {
method: 'POST',
headers: {'content-type': 'application/json'},
url:`http://localhost:3000/by-pass-api`, // your environment
data: { url }, // your https request here
};
if(!empty($youtube) && empty($link)) {
}
else if(empty($youtube) && !empty($link)) {
}
else if(empty($youtube) && empty($link)) {
}
It's a placeholder for the first parameter, which in your case evaluates to "wordpad.exe".
If you had an additional parameter, you'd use {1}
, etc.
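For example, a quick sketch with hypothetical values, showing how the positional placeholders get filled in:
string exe = "wordpad.exe";
string file = @"C:\temp\notes.txt";
string command = string.Format("{0} {1}", exe, file);   // -> "wordpad.exe C:\temp\notes.txt"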
Quit (force quit) all instances of chrome. Otherwise the below command will not work.
open -a "Google Chrome" --args --allow-file-access-from-files
Executing this command in terminal will open Chrome regardless of where it is installed.
For anyone who wants to have time differences and have results that can take negative numbers here is a good one. pad(3) = "03", pad(-2) = "-02", pad(-234) = "-234"
pad = function(n){
if(n >= 0){
return n > 9 ? "" + n : "0" + n;
}else{
return n < -9 ? "" + n : "-0" + Math.abs(n);
}
}
I figured out a way that works for me. It does require the use of a scratch table that a linked server has access to though.
I created a table and populated it with the values I need then I reference that table through a linked server.
SELECT *
FROM OPENQUERY(KHSSQLODSPRD,'SELECT *
FROM ABC.dbo.CLAIM A WITH (NOLOCK)
WHERE A.DOS >= (SELECT MAX(DATE) FROM KHSDASQL01.DA_MAIN.[dbo].[ALLFILENAMES]) ')
Just to give more perspective to the answers: spark-shell is a Scala REPL.
You can type :help to see the list of operations that are possible inside the Scala shell:
scala> :help
All commands can be abbreviated, e.g., :he instead of :help.
:edit <id>|<line> edit history
:help [command] print this summary or command-specific help
:history [num] show the history (optional num is commands to show)
:h? <string> search the history
:imports [name name ...] show import history, identifying sources of names
:implicits [-v] show the implicits in scope
:javap <path|class> disassemble a file or class name
:line <id>|<line> place line(s) at the end of history
:load <path> interpret lines in a file
:paste [-raw] [path] enter paste mode or paste a file
:power enable power user mode
:quit exit the interpreter
:replay [options] reset the repl and replay all previous commands
:require <path> add a jar to the classpath
:reset [options] reset the repl to its initial state, forgetting all session entries
:save <path> save replayable session to a file
:sh <command line> run a shell command (result is implicitly => List[String])
:settings <options> update compiler options, if possible; see reset
:silent disable/enable automatic printing of results
:type [-v] <expr> display the type of an expression without evaluating it
:kind [-v] <expr> display the kind of expression's type
:warnings show the suppressed warnings from the most recent line which had any
:load interpret lines in a file
Effective as of now (2020).
pip install cmake
conda install -c conda-forge dlib
This issue can occur if the Azure Active Directory Module for Windows PowerShell isn't loaded correctly.
To resolve this issue, follow these steps.
1.Install the Azure Active Directory Module for Windows PowerShell on the computer (if it isn't already installed). To install the Azure Active Directory Module for Windows PowerShell, go to the following Microsoft website:
Manage Azure AD using Windows PowerShell
2.If the MSOnline module isn't present, use Windows PowerShell to import the MSOnline module.
Import-Module MSOnline
After it completes, we can use this command to check it:
PS C:\Users> Get-Module -ListAvailable -Name MSOnline*
Directory: C:\windows\system32\WindowsPowerShell\v1.0\Modules
ModuleType Version Name ExportedCommands
---------- ------- ---- ----------------
Manifest 1.1.166.0 MSOnline {Get-MsolDevice, Remove-MsolDevice, Enable-MsolDevice, Disable-MsolDevice...}
Manifest 1.1.166.0 MSOnlineExtended {Get-MsolDevice, Remove-MsolDevice, Enable-MsolDevice, Disable-MsolDevice...}
For more information about this issue, please refer to the linked documentation.
Update:
We should import the Azure AD PowerShell module into VS 2015; we can add the tool and select Azure AD PowerShell.
You can use a simple contact form in HTML with a PHP mailer. It's easy to implement in your website. You can try the demo from the following link: Simple Contact/Feedback Form in HTML-PHP mailer
Otherwise you can watch the demo video at the following link: Youtube: Simple Contact/Feedback Form in HTML-PHP mailer
When you are running on localhost, you may get an error; you can check this link for more detailed information: Simple Contact/Feedback Form in HTML with php (HTML-PHP mailer)
And this is the main PHP code:
<?php
if($_POST["submit"]) {
$recipient="[email protected]"; //Enter your mail address
$subject="Contact from Website"; //Subject
$sender=$_POST["name"];
$senderEmail=$_POST["email"];
$message=$_POST["comments"];
$mailBody="Name: $sender\nEmail Address: $senderEmail\n\nMessage: $message";
mail($recipient, $subject, $mailBody);
sleep(1);
header("Location:http://blog.antonyraphel.in/sample/"); // Set here redirect page or destination page
}
?>
You can use Newtonsoft.Json
, it's a dependency of Microsoft.AspNet.Mvc.ModelBinding
which is a dependency of Microsoft.AspNet.Mvc
. So, you don't need to add a dependency in your project.json.
using Newtonsoft.Json;
....
JsonConvert.DeserializeObject(json);
Note, using a WebAPI controller you don't need to deal with JSON.
Json.NET has been removed from the ASP.NET Core 3.0 shared framework.
You can use the new JSON serializer layers on top of the high-performance Utf8JsonReader
and Utf8JsonWriter
. It deserializes objects from JSON and serializes objects to JSON. Memory allocations are kept minimal and includes support for reading and writing JSON with Stream asynchronously.
To get started, use the JsonSerializer
class in the System.Text.Json.Serialization
namespace. See the documentation for information and samples.
To use Json.NET in an ASP.NET Core 3.0 project:
services.AddMvc()
.AddNewtonsoftJson();
Read Json.NET support in Migrate from ASP.NET Core 2.2 to 3.0 Preview 2 for more information.
<html>
<script type="text/javascript">
var myJSONObject = {"bindings": 11};
alert(myJSONObject);
var stringJson =JSON.stringify(myJSONObject);
alert(stringJson);
</script>
</html>
You should define source code encoding, add this to the top of your script:
# -*- coding: utf-8 -*-
The reason why it works differently in console and in the IDE is, likely, because of different default encodings set. You can check it by running:
import sys
print sys.getdefaultencoding()
You can use a sort of Maybe monad (though I'd prefer Jay's answer):
public class Maybe<T>
{
private readonly T _value;
public Maybe(T value)
{
_value = value;
IsNothing = false;
}
public Maybe()
{
IsNothing = true;
}
public bool IsNothing { get; private set; }
public T Value
{
get
{
if (IsNothing)
{
throw new InvalidOperationException("Value doesn't exist");
}
return _value;
}
}
public override bool Equals(object other)
{
if (IsNothing)
{
return (other == null);
}
if (other == null)
{
return false;
}
return _value.Equals(other);
}
public override int GetHashCode()
{
if (IsNothing)
{
return 0;
}
return _value.GetHashCode();
}
public override string ToString()
{
if (IsNothing)
{
return "";
}
return _value.ToString();
}
public static implicit operator Maybe<T>(T value)
{
return new Maybe<T>(value);
}
public static explicit operator T(Maybe<T> value)
{
return value.Value;
}
}
Your method would look like:
public static Maybe<T> GetQueryString<T>(string key) where T : IConvertible
{
if (String.IsNullOrEmpty(HttpContext.Current.Request.QueryString[key]) == false)
{
string value = HttpContext.Current.Request.QueryString[key];
try
{
return (T)Convert.ChangeType(value, typeof(T));
}
catch
{
//Could not convert. Pass back default value...
return new Maybe<T>();
}
}
return new Maybe<T>();
}
It is in sys too:
import sys
# its win32, maybe there is win64 too?
is_windows = sys.platform.startswith('win')
You can just pass a Date
object:
For current date:
$('#calendar').fullCalendar({
defaultDate: new Date()
});
For specific date '2016-05-20':
$('#calendar').fullCalendar({
defaultDate: new Date(2016, 4, 20)
});
There is probably another table with a foreign key referencing the primary key you are trying to change.
To find out which table caused the error, you can run SHOW ENGINE INNODB STATUS and then look at the LATEST FOREIGN KEY ERROR section.
Use SHOW CREATE TABLE categories to show the name of the constraint.
Most probably it will be categories_ibfk_1.
Use that name to drop the foreign key first, and then the column:
ALTER TABLE categories DROP FOREIGN KEY categories_ibfk_1;
ALTER TABLE categories DROP COLUMN assets_id;
The first one creates a single lambda function and calls it ten times.
The second one doesn't call the function. It creates 10 different lambda functions. It puts all of those in a list. To make it equivalent to the first you need:
[(lambda x: x*x)(x) for x in range(10)]
Or better yet:
[x*x for x in range(10)]
```{r results='hide', message=FALSE, warning=FALSE}
library(RJSONIO)
library(AnotherPackage)
```
see Chunk Options in the Knitr docs
What I do in this scenario is create a table variable to hold the Ids.
Declare @Ids Table (id integer primary Key not null)
Insert @Ids(id) values (4),(7),(12),(22),(19)
-- (or call another table valued function to generate this table)
Then loop based on the rows in this table
Declare @Id Integer
While exists (Select * From @Ids)
Begin
Select @Id = Min(id) from @Ids
exec p_MyInnerProcedure @Id
Delete from @Ids Where id = @Id
End
or...
Declare @Id Integer = 0 -- assuming all Ids are > 0
While exists (Select * From @Ids
where id > @Id)
Begin
Select @Id = Min(id)
from @Ids Where id > @Id
exec p_MyInnerProcedure @Id
End
Either of the above approaches is much faster than a cursor (declared against regular user tables). Table variables have a bad rep because, when used improperly (for very wide tables with a large number of rows), they are not performant. But if you are using them only to hold a key value or a 4-byte integer, with an index (as in this case), they are extremely fast.
You have used '/0'
instead of '\0'
. This is incorrect: the '\0'
is a null character, while '/0'
is a multicharacter literal.
Moreover, in C it is OK to omit the explicit comparison with zero in your condition:
while (*(forward++)) {
...
}
is a valid way to check character, integer, pointer, etc. for being zero.
This is an old question, but I found that when you create a string like this:
<string name="newline_test">My
New line test</string>
The output in your app will be like this (no newline)
My New line test
When you put the string in quotation marks
<string name="newline_test">"My
New line test"</string>
the newline will appear:
My
New line test
You can solve this problem using this code:
if(!empty($_GET['variable from which you get']))
{
$_SESSION['something']= $_GET['variable from which you get'];
}
So when you get the variable from a GET form, it will be stored in the $_SESSION['something'] variable only when $_GET['variable from which you get'] is set and not empty; if it is empty, $_SESSION['something'] will keep the old value.
If your method doesn't have to return html and has to do something else then you can use a lambda instead of helper method in Razor
@{
ViewBag.Title = "Index";
Layout = "~/Views/Shared/_Layout.cshtml";
Func<int,int,int> Sum = (a, b) => a + b;
}
<h2>Index</h2>
@Sum(3,4)
Create a WEB-INF folder in src/webapp, and include a web.xml file inside the WEB-INF folder.
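A minimal web.xml sketch (adjust the servlet spec version to whatever your project uses):
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
         version="3.0">
    <display-name>MyWebApp</display-name>
</web-app>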
A random value?
If you want a random value, try
<?php
$value = mt_rand($min, $max);
mt_rand() will give somewhat better randomness if you are using many random numbers in a row, or if you might ever execute the script more than once a second. In general, you should use mt_rand() over rand() if there is any doubt.
It has to be a constant - the value has to be computable at the time that the procedure is created, and that one computation has to provide the value that will always be used.
Look at the definition of sys.all_parameters
:
default_value (sql_variant): If has_default_value is 1, the value of this column is the value of the default for the parameter; otherwise, NULL.
That is, whatever the default for a parameter is, it has to fit in that column.
As Alex K pointed out in the comments, you can just do:
CREATE PROCEDURE [dbo].[problemParam]
@StartDate INT = NULL,
@EndDate INT = NULL
AS
BEGIN
SET @StartDate = COALESCE(@StartDate,CONVERT(INT,(CONVERT(CHAR(8),GETDATE()-130,112))))
provided that NULL
isn't intended to be a valid value for @StartDate
.
As to the blog post you linked to in the comments - that's talking about a very specific context - that, the result of evaluating GETDATE()
within the context of a single query is often considered to be constant. I don't know of many people (unlike the blog author) who would consider a separate expression inside a UDF to be part of the same query as the query that calls the UDF.
In one line, we can set the image with this code:
[buttonName setBackgroundImage:[UIImage imageNamed:@"imageName"] forState:UIControlStateNormal];
I realize this question is a bit dated, and since it shows up in Google searches for a similar issue, I thought I would expand a little on top of @CowWarrior's answer. I was looking for a somewhat similar solution, and after scouring through countless SO questions/answers and the Bootstrap documentation, the solution was pretty simple. Again, this uses the built-in Bootstrap collapse class to show/hide divs, together with Bootstrap's "Collapse Event".
What I realized is that it is easy to do this using a Bootstrap Accordion, but most of the time, even though the required functionality is "somewhat" similar to an Accordion, it differs in that one wants to show/hide a <div> based on, let's say, menu buttons on a navbar. Below is a simple solution to this. The anchor tags (<a>) could be navbar items, and based on a collapse event the corresponding div will replace the existing div. It looks slightly sloppy in the code snippet, but it is pretty close to achieving the functionality.
All that the JavaScript does is hide all the other <div> elements using
$(".main-container.collapse").not($(this)).collapse('hide');
when the loaded <div> is displayed, by checking the collapse event shown.bs.collapse. Here's the Bootstrap documentation on the Collapse Event.
Note: main-container is just a custom class.
Here it goes:
$(".main-container.collapse").on('shown.bs.collapse', function () { _x000D_
//when a collapsed div is shown hide all other collapsible divs that are visible_x000D_
$(".main-container.collapse").not($(this)).collapse('hide');_x000D_
});
_x000D_
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>_x000D_
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>_x000D_
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet"/>_x000D_
_x000D_
<a href="#Foo" class="btn btn-default" data-toggle="collapse">Toggle Foo</a>_x000D_
<a href="#Bar" class="btn btn-default" data-toggle="collapse">Toggle Bar</a>_x000D_
_x000D_
<div id="Bar" class="main-container collapse in">_x000D_
This div (#Bar) is shown by default and can toggle_x000D_
</div>_x000D_
<div id="Foo" class="main-container collapse">_x000D_
This div (#Foo) is hidden by default_x000D_
</div>
_x000D_
Silverlight applications do not have direct access to machine.config.
If you have another list that contains all the items you would like to add you can do arList.addAll(otherList)
. Alternatively, if you will always add the same elements to the list you could create a new list that is initialized to contain all your values and use the addAll()
method, with something like
Integer[] otherList = new Integer[] {1, 2, 3, 4, 5};
arList.addAll(Arrays.asList(otherList));
or, if you don't want to create that unnecessary array:
arList.addAll(Arrays.asList(1, 2, 3, 4, 5));
Otherwise you will have to have some sort of loop that adds the values to the list individually.
Here is a solution with jQuery:
$(document).ready(function() {
    var $abc = $("#abc");
    $abc.css("height", $abc.prop("scrollHeight"));
});
abc is a textarea.
Here is my answer from a different question.
First you need to reference the Adobe Reader ActiveX Control
Adobe Acrobat Browser Control Type Library 1.0
%programfiles%\Common Files\Adobe\Acrobat\ActiveX\AcroPDF.dll
Then you just drag it into your Windows Form from the Toolbox.
And use some code like this to initialize the ActiveX Control.
private void InitializeAdobe(string filePath)
{
try
{
this.axAcroPDF1.LoadFile(filePath);
this.axAcroPDF1.src = filePath;
this.axAcroPDF1.setShowToolbar(false);
this.axAcroPDF1.setView("FitH");
this.axAcroPDF1.setLayoutMode("SinglePage");
this.axAcroPDF1.Show();
}
catch (Exception ex)
{
throw;
}
}
Make sure when your Form closes that you dispose of the ActiveX Control
this.axAcroPDF1.Dispose();
this.axAcroPDF1 = null;
otherwise Acrobat might be left lying around.
In order to set the color of highlighted item you need to set the color of cell.SelectionStyle
in iOS.
This example is to set the color of tapped item to transparent.
If you want you can change it with other colors from UITableViewCellSelectionStyle
. This is to be written in the platform project of iOS by creating a new Custom ListView renderer in your Forms project.
public class CustomListViewRenderer : ListViewRenderer
{
protected override void OnElementPropertyChanged(object sender, PropertyChangedEventArgs e)
{
base.OnElementPropertyChanged(sender, e);
if (Control == null)
{
return;
}
if (e.PropertyName == "ItemsSource")
{
foreach (var cell in Control.VisibleCells)
{
cell.SelectionStyle = UITableViewCellSelectionStyle.None;
}
}
}
}
For android you can add this style in your values/styles.xml
<style name="ListViewStyle.Light" parent="android:style/Widget.ListView">
<item name="android:listSelector">@android:color/transparent</item>
<item name="android:cacheColorHint">@android:color/transparent</item>
</style>
I'm using autolayout and none of the answers worked for me. Here is my solution that finally worked:
@property (nonatomic, assign) BOOL shouldScrollToLastRow;
- (void)viewDidLoad {
[super viewDidLoad];
_shouldScrollToLastRow = YES;
}
- (void)viewDidLayoutSubviews {
[super viewDidLayoutSubviews];
// Scroll table view to the last row
if (_shouldScrollToLastRow)
{
_shouldScrollToLastRow = NO;
[self.tableView setContentOffset:CGPointMake(0, CGFLOAT_MAX)];
}
}
It is allowed, as TD can contain both inline and block elements.
Here you can find it in the reference: http://xhtml.com/en/xhtml/reference/td/#td-contains
This should return the text value of the selected value
var vSkill = document.getElementById('newSkill');
var vSkillText = vSkill.options[vSkill.selectedIndex].innerHTML;
alert(vSkillText);
Props: @Tanerax for reading the question, knowing what was asked and answering it before others figured it out.
Edit: DownModed, cause I actually read a question fully, and answered it, sad world it is.
You can get the product name like this:
foreach ( $cart_object->cart_contents as $value ) {
$_product = apply_filters( 'woocommerce_cart_item_product', $value['data'] );
if ( ! $_product->is_visible() ) {
echo $_product->get_title();
} else {
echo $_product->get_title();
}
}
Thanks to Ben Koehler for his solution.
However, I had a problem with multiple instances of datepickers, some of which needed day selection. Ben Koehler's solution (in edit 3) works, but hides the day selection in all instances. Here's an update that solves this issue:
$('.date-picker').datepicker({
dateFormat: "mm/yy",
changeMonth: true,
changeYear: true,
showButtonPanel: true,
onClose: function(dateText, inst) {
if($('#ui-datepicker-div').html().indexOf('ui-datepicker-close ui-state-default ui-priority-primary ui-corner-all ui-state-hover') > -1) {
$(this).datepicker(
'setDate',
new Date(
$("#ui-datepicker-div .ui-datepicker-year :selected").val(),
$("#ui-datepicker-div .ui-datepicker-month :selected").val(),
1
)
).trigger('change');
$('.date-picker').focusout();
}
$("#ui-datepicker-div").removeClass("month_year_datepicker");
},
beforeShow : function(input, inst) {
if((datestr = $(this).val()).length > 0) {
year = datestr.substring(datestr.length-4, datestr.length);
month = datestr.substring(0, 2);
$(this).datepicker('option', 'defaultDate', new Date(year, month-1, 1));
$(this).datepicker('setDate', new Date(year, month-1, 1));
$("#ui-datepicker-div").addClass("month_year_datepicker");
}
}
});
As mentioned, Java isn't able to delete a folder that contains files, so first delete the files and then the folder.
Here's a simple example to do this:
import org.apache.commons.io.FileUtils;
// First, remove the files from inside the folder
FileUtils.cleanDirectory(folder/path);
// Then, remove the folder
FileUtils.deleteDirectory(folder/path);
Or:
FileUtils.forceDelete(new File(destination));
If you have several dialogs that could be opened on a page, this will allow any of them to be closed by clicking on the background:
$('body').on('click','.ui-widget-overlay', function() {
$('.ui-dialog').filter(function () {
return $(this).css("display") === "block";
}).find('.ui-dialog-content').dialog('close');
});
(Only works for modal dialogs, as it relies on '.ui-widget-overlay'. And it does close all open dialogs any time the background of one of them is clicked.)
In Visual Studio 2015 (solution under source control, MVC project), csano's Update-Package -Reinstall -ProjectName Your.Project.Name worked, but it ran into some write locks.
I had to delete the "packages" folder manually first (it seemed to be locked because of the source control).
Also, I had to re-install the MVC package from the NuGet Package Manager.
I would recommend using mysqldump and from php use the system command as suggested in the article you found.
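A rough sketch of that idea (the credentials, database name, and output path are placeholders):
<?php
$command = sprintf(
    'mysqldump --user=%s --password=%s %s > %s',
    escapeshellarg('dbuser'),
    escapeshellarg('dbpass'),
    escapeshellarg('mydatabase'),
    escapeshellarg('/tmp/backup.sql')
);
system($command, $returnCode);   // $returnCode is 0 on success
?>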
Well, std::string
is a class, const char *
is a pointer. Those are two different things. It's easy to get from string
to a pointer (since it typically contains one that it can just return), but for the other way, you need to create an object of type std::string
.
My recommendation: Functions that take constant strings and don't modify them should always take const char *
as an argument. That way, they will always work - with string literals as well as with std::string
(via an implicit c_str()
).
Well, conda install tensorflow
worked perfect for me!
3306 is the default port for MySQL. Check it with:
netstat -nl|grep 3306
it should give this result:
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN
The other answers do not give you the exact number!
This function calculates the desired number accurately and returns it as a string to prevent it from being changed by JavaScript.
If you need a numerical result, just multiply the result of the function by one.
function toNonExponential(value) {
// if value is not a number try to convert it to number
if (typeof value !== "number") {
value = parseFloat(value);
// after convert, if value is not a number return empty string
if (isNaN(value)) {
return "";
}
}
var sign;
var e;
// if value is negative, save "-" in sign variable and calculate the absolute value
if (value < 0) {
sign = "-";
value = Math.abs(value);
}
else {
sign = "";
}
// if value is between 0 and 1
if (value < 1.0) {
// get e value
e = parseInt(value.toString().split('e-')[1]);
// if value is exponential convert it to non exponential
if (e) {
value *= Math.pow(10, e - 1);
value = '0.' + (new Array(e)).join('0') + value.toString().substring(2);
}
}
else {
// get e value
e = parseInt(value.toString().split('e+')[1]);
// if value is exponential convert it to non exponential
if (e) {
value /= Math.pow(10, e);
value += (new Array(e + 1)).join('0');
}
}
// if value has negative sign, add to it
return sign + value;
}
In our case, deletion was not possible due to already having an app that we were in pre-release. The fix was not to delete but rather to edit each section, including version number, that needed to change for the new candidate.
You need to install the Java SDK (JDK) and give the path up to the bin directory, which contains the java.exe file.
Example: c:/programfiles/java/jdk/bin
You can create a temp table variable and insert the data into it, then insert the data into your actual table by selecting it from the temp table.
declare @TableVar table
(
firstCol varchar(50) NOT NULL,
secondCol varchar(50) NOT NULL
)
BULK INSERT @TableVar FROM 'PathToCSVFile' WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')
INSERT INTO dbo.ExistingTable
(
firstCol,
secondCol
)
SELECT firstCol,
secondCol
FROM @TableVar
GO
Use this code to open the submenu on mousehover (desktop only):
$('ul.nav li.dropdown').hover(function () {
if ($(window).width() > 767) {
$(this).find('.dropdown-menu').show();
}
}, function () {
if ($(window).width() > 767) {
$(this).find('.dropdown-menu').hide().css('display','');
}
});
And if you want the first level menu to be clickable, even on mobile add this:
$('.dropdown-toggle').click(function() {
if ($(this).next('.dropdown-menu').is(':visible')) {
window.location = $(this).attr('href');
}
});
The submenu (dropdown-menu) will be opened with mousehover on desktop, and with click/touch on mobile and tablet.
Once the submenu was open, a second click will let you open the link.
Thanks to the if ($(window).width() > 767)
, the submenu will take the full screen width on mobile.
You do not specify your environment and version of Javascript (ECMAscript), and I realise this post was from 2009, but just for completeness, with the release of ECMA2018 we can now use the s
flag to cause .
to match '\n', see https://stackoverflow.com/a/36006948/141801
Thus:
let s = 'I am a string\nover several\nlines.';
console.log('String: "' + s + '".');
let r = /string.*several.*lines/s; // Note 's' modifier
console.log('Match? ' + r.test(s)); // 'test' returns true
This is a recent addition and will not work in many current environments, for example Node v8.7.0 does not seem to recognise it, but it works in Chromium, and I'm using it in a Typescript test I'm writing and presumably it will become more mainstream as time goes by.
if(list.ElementAtOrDefault(2) != null)
{
// logic
}
ElementAtOrDefault() is part of the System.Linq
namespace.
Although you have a List, so you can use list.Count > 2
.
To resolve this problem, I just created a new folder and put some new files in it. Then I used these commands:
* git add .
* git commit
* git remote add origin `your address`
Then it asks me to log in, so I enter my username and password. After that:
git pull
git push origin master
Finished: you have pushed your code to your GitHub.
I couldn't get the above code to work.
Google does a great explanation though here: http://code.google.com/apis/maps/documentation/javascript/basics.html#DetectingUserLocation
Where they first use the W3C Geolocation method and then offer the Google.gears fallback method for older browsers.
The example is here:
http://code.google.com/apis/maps/documentation/javascript/examples/map-geolocation.html
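For reference, a bare-bones W3C Geolocation sketch (without the Gears fallback) looks like this:
if (navigator.geolocation) {
  navigator.geolocation.getCurrentPosition(
    function (position) {
      console.log(position.coords.latitude, position.coords.longitude);
    },
    function (error) {
      console.log('Geolocation failed: ' + error.message);
    }
  );
}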
You would use an expression when you want to treat your function as data and not as code. You can do this if you want to manipulate the code (as data). Most of the time if you don't see a need for expressions then you probably don't need to use one.
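For example, a function expression can be assigned to a variable and passed around like any other value:
// a function expression stored in a variable...
var square = function (x) { return x * x; };
// ...can then be handed to other code as data
var squares = [1, 2, 3].map(square);   // [1, 4, 9]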
You need to add a dot, which means to use the Dockerfile in the local directory.
For example:
docker build -t mytag .
It means you use the Dockerfile in the local directory, and if you use docker 1.5 you can specify a Dockerfile elsewhere. Extract from the help output from docker build:
-f, --file="" Name of the Dockerfile(Default is 'Dockerfile' at context root)
next
- it's like return
, but for blocks! (So you can use this in any proc
/lambda
too.)
That means you can also say next n
to "return" n
from the block. For instance:
puts [1, 2, 3].map do |e|
next 42 if e == 2
e
end.inject(&:+)
This will yield 46
.
Note that return
always returns from the closest def
, and never a block; if there's no surrounding def
, return
ing is an error.
Using return
from within a block intentionally can be confusing. For instance:
def my_fun
[1, 2, 3].map do |e|
return "Hello." if e == 2
e
end
end
my_fun
will result in "Hello."
, not [1, "Hello.", 2]
, because the return
keyword pertains to the outer def
, not the inner block.
For everyone coming to this thread with fractional seconds in your timestamp use:
to_timestamp('2018-11-03 12:35:20.419000', 'YYYY-MM-DD HH24:MI:SS.FF')
Implementation with Guzzle library:
use GuzzleHttp\Client;
use GuzzleHttp\RequestOptions;
$httpClient = new Client();
$response = $httpClient->post(
'https://postman-echo.com/post',
[
RequestOptions::BODY => 'POST raw request content',
RequestOptions::HEADERS => [
'Content-Type' => 'application/x-www-form-urlencoded',
],
]
);
echo(
$response->getBody()->getContents()
);
PHP CURL extension:
$curlHandler = curl_init();
curl_setopt_array($curlHandler, [
CURLOPT_URL => 'https://postman-echo.com/post',
CURLOPT_RETURNTRANSFER => true,
/**
* Specify POST method
*/
CURLOPT_POST => true,
/**
* Specify request content
*/
CURLOPT_POSTFIELDS => 'POST raw request content',
]);
$response = curl_exec($curlHandler);
curl_close($curlHandler);
echo($response);
If you are working in a restricted workplace, you will probably encounter this problem.
A combination of a few things worked for me; basically, change https to http.
From https:
repositories {
jcenter()
}
To :
repositories {
maven { url "http://jcenter.bintray.com" }
}
and in gradle-wrapper.properties:
From :
distributionUrl=https\://services.gradle.org/distributions/gradle-3.3-all.zip
To :
distributionUrl=http\://services.gradle.org/distributions/gradle-3.3-all.zip
And then
- (optional) File -> Invalidate Caches / Restart
- Give a clean build.
To verify : Check your Gradle console. It should start downloading libs from jcenter via HTTP.
Use wait in a loop to wait for all the processes to terminate:
function anywait()
{
for pid in "$@"
do
wait $pid
echo "Process $pid terminated"
done
echo 'All processes terminated'
}
This function exits immediately once all the processes have terminated. This is the most efficient solution.
Use kill -0 in a loop to wait for all the processes to terminate, and do anything you like between checks:
function anywait_w_status()
{
for pid in "$@"
do
while kill -0 "$pid"
do
echo "Process $pid still running..."
sleep 1
done
done
echo 'All processes terminated'
}
The reaction time goes down to the sleep interval, since some sleeping is needed to prevent high CPU usage.
A realistic usage:
Waiting for all the processes to terminate while informing the user about the still-running PIDs.
function anywait_w_status2()
{
while true
do
alive_pids=()
for pid in "$@"
do
kill -0 "$pid" 2>/dev/null \
&& alive_pids+="$pid "
done
if [ ${#alive_pids[@]} -eq 0 ]
then
break
fi
echo "Process(es) still running... ${alive_pids[@]}"
sleep 1
done
echo 'All processes terminated'
}
These functions receive the PIDs as arguments via $@, as a Bash array.
I had the same problem, and I found it was in the domain name of the email address, which had somehow been changed from . to _, like name@domain_com instead of [email protected]
The easiest way would be to use Groovy's File.text property, which means you could just do:
new File(filename).text
if(!System.IO.Directory.Exists(@"c:\mp_upload"))
{
System.IO.Directory.CreateDirectory(@"c:\mp_upload");
}
I sort of agree with leander on this one.
call:
new calc_stanica().execute(stringList.toArray(new String[stringList.size()]));
task:
public class calc_stanica extends AsyncTask<String, Void, ArrayList<String>> {
@Override
protected ArrayList<String> doInBackground(String... args) {
...
}
@Override
protected void onPostExecute(ArrayList<String> result) {
... //do something with the result list here
}
}
Or you could just make the result list a class field and replace the ArrayList with a Boolean (success/failure):
public class calc_stanica extends AsyncTask<String, Void, Boolean> {
private List<String> resultList;
@Override
protected Boolean doInBackground(String... args) {
...
}
@Override
protected void onPostExecute(Boolean success) {
... //if successfull, do something with the result list here
}
}
I think putting autocomplete="off" does not help at all.
I have an alternative solution:
<input type="text" name="preventAutoPass" id="preventAutoPass" style="display:none" />
add this before your password input.
eg:<input type="text" name="txtUserName" id="txtUserName" />
<input type="text" name="preventAutoPass" id="preventAutoPass" style="display:none" />
<input type="password" name="txtPass" id="txtPass" autocomplete="off" />
This does not prevent the browser from asking to save the password, but it does prevent the password from being filled in automatically.
Cheers.
package myguo;
import javax.swing.*;
public class MyGuo {
JFrame f;
JButton bt1 , bt2 ;
JTextField t1,t2;
JLabel l1,l2;
MyGuo(){
f=new JFrame("LOG IN FORM");
f.setLocation(500,300);
f.setSize(600,500);
f.setLayout(null);
f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
l1=new JLabel("NAME");
l1.setBounds(50,70,80,30);
l2=new JLabel("PASSWORD");
l2.setBounds(50,100,80,30);
t1=new JTextField();
t1.setBounds(140, 70, 200,30);
t2=new JTextField();
t2.setBounds(140, 110, 200,30);
bt1 =new JButton("LOG IN");
bt1.setBounds(150,150,80,30);
bt2 =new JButton("CLEAR");
bt2.setBounds(235,150,80,30);
f.add(l1);
f.add(l2);
f.add(t1);
f.add(t2);
f.add(bt1);
f.add(bt2);
f.setVisible(true);
}
public static void main(String[] args) {
MyGuo myGuo = new MyGuo();
}
}
Yes, you can use the CSS feature named @font-face. It has only been officially approved in CSS3, but been proposed and implemented in CSS2 and has been supported in IE for quite a long time.
You declare it in the CSS like this:
@font-face { font-family: Delicious; src: url('Delicious-Roman.otf'); }
@font-face { font-family: Delicious; font-weight: bold; src: url('Delicious-Bold.otf');}
Then, you can just reference it like the other standard fonts:
h3 { font-family: Delicious, sans-serif; }
So, in this case,
<html>
<head>
<style>
@font-face { font-family: JuneBug; src: url('JUNEBUG.TTF'); }
h1 {
font-family: JuneBug
}
</style>
</head>
<body>
<h1>Hey, June</h1>
</body>
</html>
And you just need to put JUNEBUG.TTF in the same location as the html file.
I downloaded the font from the dafont.com website.
You can do this using cross apply
SELECT c.BalanceDue AS BalanceDue
FROM Invoices
cross apply (select (InvoiceTotal - PaymentTotal - CreditTotal) as BalanceDue) as c
WHERE c.BalanceDue > 0;
SELECT DATABASEPROPERTYEX('DBName', 'Collation') SQLCollation;
Where DBName is your database name.
Use &nbsp;
It is the entity used to represent a non-breaking space. It is essentially a standard space, the primary difference being that a browser should not break (or wrap) a line of text at the point that this entity occupies.
var a = 'something' + '&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;' + 'something'
A common character entity used in HTML is the non-breaking space (&nbsp;).
Remember that browsers will always truncate spaces in HTML pages. If you write 10 spaces in your text, the browser will remove 9 of them. To add real spaces to your text, you can use the &nbsp; character entity.
http://www.w3schools.com/html/html_entities.asp
Demo
var a = 'something' + '&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;' + 'something';
document.body.innerHTML = a;
The 8086 has a large family of instructions that accept a register operand and an effective address, perform some computations to compute the offset part of that effective address, and perform some operation involving the register and the memory referred to by the computed address. It was fairly simple to have one of the instructions in that family behave as above except for skipping that actual memory operation. Thus, the instructions:
mov ax,[bx+si+5]
lea ax,[bx+si+5]
were implemented almost identically internally. The difference is a skipped step. Both instructions work something like:
temp = fetched immediate operand (5)
temp += bx
temp += si
address_out = temp (skipped for LEA)
trigger 16-bit read (skipped for LEA)
temp = data_in (skipped for LEA)
ax = temp
As for why Intel thought this instruction was worth including, I'm not exactly sure, but the fact that it was cheap to implement would have been a big factor. Another factor would have been the fact that Intel's assembler allowed symbols to be defined relative to the BP
register. If fnord
was defined as a BP
-relative symbol (e.g. BP+8
), one could say:
mov ax,fnord ; Equivalent to "mov ax,[BP+8]"
If one wanted to use something like stosw
to store data to a BP-relative address, being able to say
mov ax,0 ; Data to store
mov cx,16 ; Number of words
lea di,fnord
rep movs fnord ; Address is ignored EXCEPT to note that it's an SS-relative word ptr
was more convenient than:
mov ax,0 ; Data to store
mov cx,16 ; Number of words
mov di,bp
add di,offset fnord (i.e. 8)
rep movs fnord ; Address is ignored EXCEPT to note that it's an SS-relative word ptr
Note that forgetting the word "offset" would cause the contents of location [BP+8]
, rather than the value 8, to be added to DI
. Oops.
The solution above is great but in my case the injection was not working. I needed to use autowireBeanProperties instead, probably due to the way my context is configured:
import org.quartz.spi.TriggerFiredBundle;
import org.springframework.beans.factory.config.AutowireCapableBeanFactory;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.scheduling.quartz.SpringBeanJobFactory;
public final class AutowiringSpringBeanJobFactory extends SpringBeanJobFactory implements
ApplicationContextAware {
private transient AutowireCapableBeanFactory beanFactory;
@Override
public void setApplicationContext(final ApplicationContext context) {
beanFactory = context.getAutowireCapableBeanFactory();
}
@Override
protected Object createJobInstance(final TriggerFiredBundle bundle) throws Exception {
final Object job = super.createJobInstance(bundle);
//beanFactory.autowireBean(job);
beanFactory.autowireBeanProperties(job, AutowireCapableBeanFactory.AUTOWIRE_BY_TYPE, true);
return job;
}
}
The common convention would be to put it in a .sh file that looks like this -
#!/bin/bash
java -cp ".;./supportlibraries/Framework_Core.jar;... etc
Note that '\' become '/'.
You could execute as
sh myfile.sh
or set the x bit on the file
chmod +x myfile.sh
and then just call
myfile.sh
As of API 21, you could also use:
ResourcesCompat.getDrawable(getResources(), R.drawable.name, null);
Instead of ContextCompat.getDrawable(context, android.R.drawable.ic_dialog_email)
There are various symbols which could be considered adequate replacements, including:
| | - two standard (bolded) vertical bars.
▋▋ - ▋
and another▋
▌▌ - ▌
and another▌
▍▍ - ▍
and another▍
▎▎ - ▎
and another▎
❚❚ - ❚
and another ❚
I may have missed out one or two, but I think these are the better ones. Here's a list of symbols just in case.
If you want to call the "inner" function with the "outer" function, you can do this:
function outer() {
function inner() {
alert("hi");
}
return { inner };
}
And on "onclick" event you call the function like this:
<input type="button" onclick="outer().inner();" value="ACTION">?
Here is an email from Guido van Rossum in Python's dev list explaining why he choose not to return self
on operations that affects the object and don't return a new one.
This comes from a coding style (popular in various other languages, I believe especially Lisp revels in it) where a series of side effects on a single object can be chained like this:
x.compress().chop(y).sort(z)
which would be the same as
x.compress()
x.chop(y)
x.sort(z)
I find the chaining form a threat to readability; it requires that the reader must be intimately familiar with each of the methods. The second form makes it clear that each of these calls acts on the same object, and so even if you don't know the class and its methods very well, you can understand that the second and third call are applied to x (and that all calls are made for their side-effects), and not to something else.
I'd like to reserve chaining for operations that return new values, like string processing operations:
y = x.rstrip("\n").split(":").lower()
If you are using an extracted Tomcat, then startup.sh and shutdown.sh are two scripts located in TOMCAT/bin/ to start and shut down Tomcat; you could use those.
If Tomcat is installed as a service, then:
/etc/init.d/tomcat5.5 start
/etc/init.d/tomcat5.5 stop
/etc/init.d/tomcat5.5 restart
Please refer to http://complete-concrete-concise.com/web-tools/how-to-change-localhost-to-a-domain-name
This is the best solution.
Here is a solution in pure JS. You can do it with the HTML5 saveAs. For example, this lib could be helpful: https://github.com/eligrey/FileSaver.js
Look at the demo: http://eligrey.com/demos/FileSaver.js/
P.S. There is no information there about saving JSON, but you can do it by changing the file type to "application/json" and the extension to .json
The syntax you wrote first is not valid. You can achieve something similar using the following:
var map = {"aaa": "rrr", "bbb": "ppp" /* etc */ };
It is not printing correctly because you need to use Base64 encoding. With Java 8 you can encode using Base64 encoder class.
public static String toSHA1(byte[] convertme) throws NoSuchAlgorithmException {
MessageDigest md = MessageDigest.getInstance("SHA-1");
return Base64.getEncoder().encodeToString(md.digest(convertme));
}
Result
This will give you your expected output of 5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8
I wrote some code to read a file line by line to meet my requirement that different lines have different data types, following the articles read-line-by-line-of-a-file-in-r and determining-number-of-linesrecords. It should also be a better solution for big files, I think. My R version is 3.3.2.
con = file("pathtotargetfile", "r")
readsizeof<-2 # read size for one step to caculate number of lines in file
nooflines<-0 # number of lines
while((linesread<-length(readLines(con,readsizeof)))>0) # calculate number of lines. Also a better solution for big file
nooflines<-nooflines+linesread
con = file("pathtotargetfile", "r") # open file again to variable con, since the cursor have went to the end of the file after caculating number of lines
typelist = list(0,'c',0,'c',0,0,'c',0) # a list to specific the lines data type, which means the first line has same type with 0 (e.g. numeric)and second line has same type with 'c' (e.g. character). This meet my demand.
for(i in 1:nooflines) {
tmp <- scan(file=con, nlines=1, what=typelist[[i]], quiet=TRUE)
print(is.vector(tmp))
print(tmp)
}
close(con)
The good news is a transaction in SQL Server can span multiple batches (each exec
is treated as a separate batch.)
You can wrap your EXEC
statements in a BEGIN TRANSACTION
and COMMIT
but you'll need to go a step further and rollback if any errors occur.
Ideally you'd want something like this:
BEGIN TRY
BEGIN TRANSACTION
exec( @sqlHeader)
exec(@sqlTotals)
exec(@sqlLine)
COMMIT
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK
END CATCH
The BEGIN TRANSACTION
and COMMIT
I believe you are already familiar with. The BEGIN TRY
and BEGIN CATCH
blocks are basically there to catch and handle any errors that occur. If any of your EXEC
statements raise an error, the code execution will jump to the CATCH
block.
Your existing SQL building code should be outside the transaction (above) as you always want to keep your transactions as short as possible.
Maybe this answer here will help you. It seems that you want to dispose of the context periodically, because the context gets bigger and bigger as the number of attached entities grows.
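A rough sketch of that idea, assuming Entity Framework; MyDbContext, Items and Item are hypothetical stand-ins for your own model:

static void BulkInsert(List<Item> itemsToInsert)
{
    const int batchSize = 1000;
    var context = new MyDbContext();
    try
    {
        for (int i = 0; i < itemsToInsert.Count; i++)
        {
            context.Items.Add(itemsToInsert[i]);
            if ((i + 1) % batchSize == 0)
            {
                context.SaveChanges();
                context.Dispose();            // throw away the bloated change tracker
                context = new MyDbContext();  // start fresh with an empty context
            }
        }
        context.SaveChanges(); // flush the final partial batch
    }
    finally
    {
        context.Dispose();
    }
}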
It could also be done without a visual client with the following small script.
$ cat ~/bin/pdel
#!/bin/sh
#Todo: add error handling
( p4 -c $1 client -o | perl -pne 's/\blocked\s//' | p4 -c $1 client -i ) && p4 client -d $1
function foo(data)
{
// do stuff with JSON
}
var script = document.createElement('script');
script.src = '//example.com/path/to/jsonp?callback=foo'
document.getElementsByTagName('head')[0].appendChild(script);
// or document.head.appendChild(script) in modern browsers
I just wanted to leave my .scss
example here; I think it's kind of a best practice. Especially if you do customization, it's nice to set the widths only once; applying them everywhere just multiplies the chance of human error.
I'm looking forward to your feedback!
// Set your parameters
$widthSmall: 768px;
$widthMedium: 992px;
// Prepare your "function"
@mixin in-between {
@media (min-width:$widthSmall) and (max-width:$widthMedium) {
@content;
}
}
// Apply your "function"
main {
@include in-between {
//Do something between two media queries
padding-bottom: 20px;
}
}
You can specify the remote’s URL by applying the UNC path to the file protocol. This requires you to use four slashes:
git clone file:////<host>/<share>/<path>
For example, if your main machine has the IP 192.168.10.51 and the computer name main
, and it has a share named code
which itself is a git repository, then both of the following commands should work equally:
git clone file:////main/code
git clone file:////192.168.10.51/code
If the Git repository is in a subdirectory, simply append the path:
git clone file:////main/code/project-repository
git clone file:////192.168.10.51/code/project-repository
This worked for me (iOS SDK 9.3):
In the build settings of app.xcodeproj, set Valid Architectures to armv7 armv7s and Build Active Architecture Only to No.
Then clean and build.
If your server-side code is Java, then follow these steps:
Step 1: Download the UrlRewriteFilter JAR and save it to the build path WEB-INF/lib
Step 2: Enable HTML5 mode: $locationProvider.html5Mode(true);
Step 3: Set the base URL: <base href="/example.com/"/>
Step 4: Copy and paste the following into your web.xml:
<filter>
<filter-name>UrlRewriteFilter</filter-name>
<filter-class>org.tuckey.web.filters.urlrewrite.UrlRewriteFilter</filter-class>
</filter>
<filter-mapping>
<filter-name>UrlRewriteFilter</filter-name>
<url-pattern>/*</url-pattern>
<dispatcher>REQUEST</dispatcher>
<dispatcher>FORWARD</dispatcher>
</filter-mapping>
Step 5: Create the file WEB-INF/urlrewrite.xml:
<urlrewrite default-match-type="wildcard">
<rule>
<from>/</from>
<to>/index.html</to>
</rule>
<!--Write every state dependent on your project url-->
<rule>
<from>/example</from>
<to>/index.html</to>
</rule>
</urlrewrite>
I don't know why you are against a for loop (presumably you meant a for loop
, not specifically for..in
), they are fast and easy to read. Anyhow, here's some options.
For loop:
function getByValue(arr, value) {
for (var i=0, iLen=arr.length; i<iLen; i++) {
if (arr[i].b == value) return arr[i];
}
}
.filter
function getByValue2(arr, value) {
var result = arr.filter(function(o){return o.b == value;} );
return result.length ? result[0] : undefined; // filter always returns an array, so check its length
}
.forEach
function getByValue3(arr, value) {
var result = [];
arr.forEach(function(o){if (o.b == value) result.push(o);} );
return result.length ? result[0] : undefined; // same length check as above
}
If, on the other hand, you really did mean for..in and want to find an object with any property with a value of 6, then you must use for..in unless you pass the names to check.
Example
function getByValue4(arr, value) {
var o;
for (var i=0, iLen=arr.length; i<iLen; i++) {
o = arr[i];
for (var p in o) {
if (o.hasOwnProperty(p) && o[p] == value) {
return o;
}
}
}
}
You can use the Expressions window: while debugging, go to Window -> Show View -> Expressions; it gives you a place to type the variables whose contents you need to see.
Here is another, easier option:
select to_number(column_value) as IDs from xmltable('1,2,3,4,5');
apiclient
is not in the list of third-party libraries supplied by the App Engine runtime: http://developers.google.com/appengine/docs/python/tools/libraries27 .
You need to copy apiclient
into your project directory, and you need to copy uritemplate
and httplib2
too.
Note: any third-party library that is not in that documented list must be copied into your App Engine project directory.
I know this question is old, but I noticed most people are using a checkbox. The accepted answer uses a button, but cannot work with several buttons (e.g. one at the top of the page and one at the bottom). So here is a modification that does both.
HTML
<a href="#" class="check-box-machine my-button-style">Check All</a>
jQuery
var ischecked = false;
$(".check-box-machine").click(function(e) {
e.preventDefault();
if (ischecked == false) {
$("input:checkbox").attr("checked","checked");
$(".check-box-machine").html("Uncheck All");
ischecked = true;
} else {
$("input:checkbox").removeAttr("checked");
$(".check-box-machine").html("Check All");
ischecked = false;
}
});
This will allow you to have as many buttons as you would like with changing text and checkbox values. I included an e.preventDefault()
call because this will stop the page from jumping to the top due to the href="#"
part.
So simple and concise. Thanks to the open-source developer cketti for sharing this solution:
String mailto = "mailto:[email protected]" +
"?cc=" + "[email protected]" +
"&subject=" + Uri.encode(subject) +
"&body=" + Uri.encode(bodyText);
Intent emailIntent = new Intent(Intent.ACTION_SENDTO);
emailIntent.setData(Uri.parse(mailto));
try {
startActivity(emailIntent);
} catch (ActivityNotFoundException e) {
//TODO: Handle case where no email app is available
}
And this is the link to his/her gist.
I was having the same problem. I tried a
npm config set registry http://registry.npmjs.org/
to turn off https. I also tried
npm set progress=false
to turn off the progress bar (it has been reported to slow down downloads).
The problem was with my network driver. I just needed to reboot and the lag went away.
According to HTML living standard specification, the load
event is
Fired at the Window when the document has finished loading; fired at an element containing a resource (e.g. img, embed) when its resource has finished loading
I.e. load
event is not fired on document
object.
Credit: Why does document.addEventListener(‘load’, handler) not work?
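A small illustration of the difference (handler bodies are just placeholders):

// Fired: load is dispatched to the window object
window.addEventListener("load", function () {
  console.log("window load fired");
});

// Not fired: the document object never receives a load event
document.addEventListener("load", function () {
  console.log("this never runs");
});

// If you only need the DOM to be parsed, DOMContentLoaded does fire on document
document.addEventListener("DOMContentLoaded", function () {
  console.log("DOMContentLoaded fired");
});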
As muhammad-adil suggested, for SDK version 21 and above,
android:indeterminateTint="@color/orange"
in XML works for me and is easy enough.
If you're looking for something a little more native, you can use getGnuWin32 to install all of the Unix command-line tools that have been ported. That plus winBash gives you most of a working Unix environment. Add Console2 for a better terminal emulator and you almost can't tell you're on Windows!
Cygwin is a better toolkit overall, but I have found myself running into surprise problems because of the divide between it and Windows. None of these solutions are as good as a native Linux system, though.
You may want to look into using virtualbox to create a linux VM with your distro of choice. Set it up to share a folder with the host os, and you can use a true linux development environment, and share with windows. Just watch out for those EOL markers, they get ya every time.
You can use the following SQL query:
IF ((SELECT COUNT(*) FROM table1 WHERE project = 1) > 0)
SELECT product, price FROM table1 WHERE project = 1
ELSE IF ((SELECT COUNT(*) FROM table1 WHERE project = 2) > 0)
SELECT product, price FROM table1 WHERE project = 2
ELSE IF ((SELECT COUNT(*) FROM table1 WHERE project = 3) > 0)
SELECT product, price FROM table1 WHERE project = 3
Did you write
String guid = System.Guid.NewGuid().ToString;
or
String guid = System.Guid.NewGuid().ToString();
Notice the parentheses.
Doesn't all of this assume that the base class is a new-style class?
class A:
def __init__(self):
print("A.__init__()")
class B(A):
def __init__(self):
print("B.__init__()")
super(B, self).__init__()
This will not work in Python 2. class A
must be new-style, i.e. class A(object)
It should be legal to put a semicolon directly before the WITH keyword.
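For example, in T-SQL (the table and values here are made up), the semicolon terminates whatever statement came before, so the WITH starts a new statement cleanly:

DECLARE @t TABLE (id int, price money)
INSERT INTO @t VALUES (1, 10), (2, 20)

;WITH Expensive AS (
    SELECT id, price FROM @t WHERE price > 15
)
SELECT * FROM Expensive;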
You can also have your query return the time as a Unix timestamp. That would get rid of the need to call strtotime()
and make things a bit less intensive on the PHP side...
select UNIX_TIMESTAMP(timsstamp) as unixtime from the_table where id = 1234;
Then in PHP just use the date()
function to format it whichever way you'd like.
<?php
echo date('l jS \of F Y h:i:s A', $row->unixtime);
?>
or
<?php
echo date('F j, Y, g:i a', $row->unixtime);
?>
I like this approach as opposed to using MySQL's DATE_FORMAT
function, because it allows you to reuse the same query to grab the data and allows you to alter the formatting in PHP.
It's annoying to have two different queries just to change the way the date looks in the UI.
This one drove me completely insane and I couldn't find anything helpful to solve it. This is probably not the reason most people have this issue but I just hope that someone else will benefit from this answer.
What caused my problem was a <clear />
statement in the <assemblies>
config section. I had added this because in production it had been required because there were multiple unrelated applications on the same hosting plan and I didn't want any of them to be affected by others. The more correct solution would have been to have just used web config transforms on publish.
Hope this helps someone else!
The set
statement doesn't treat spaces the way you expect; your variable is really named Pathname[space]
and is equal to [space]C:\Program Files
.
Remove the spaces from both sides of the =
sign, and put the value in double quotes:
set Pathname="C:\Program Files"
Also, if your command prompt is not open to C:\, then using cd
alone can't change drives.
Use
cd /d %Pathname%
or
pushd %Pathname%
instead.
There is no installcheck element in the bootstrapper package manifest shipped with Visual C++. I guess Microsoft wants it to always install if you set it as a prerequisite.
Of course, you can still call MsiQueryProductState to check whether the VC redist package is installed via MSI. The package code can be found by running
wmic product get
at command line, or if you are already at wmic:root\cli, run
product where "Caption like '%C++ 2012%'"
If you are running on OS X using Docker tool, follow this.
Restart the daemon and configure your environment:
docker-machine restart
And then
docker-machine env
Finally,
eval $(docker-machine env)
To test the daemon is running:
docker ps -a
or docker-machine ls
. This will list all containers.
I looked into what you are trying to achieve, because I remember I wanted to do the same thing. Inspired by Vinay I wrote something that works for me and I sort of understand. But I am not an expert, so please be careful.
I don't know how Vinay knows you are using Mac OS X, but it should work roughly like this with most Unix-like OSes. opengroup.org is a really helpful resource.
Make sure to flush the buffer before using the function.
#include <stdio.h>
#include <termios.h> //termios, TCSANOW, ECHO, ICANON
#include <unistd.h> //STDIN_FILENO
void pressKey()
{
//the struct termios stores all kinds of flags which can manipulate the I/O Interface
//I keep an old one to save the old settings and a new one to modify
static struct termios oldt, newt;
printf("Press key to continue....\n");
//tcgetattr gets the parameters of the current terminal
//STDIN_FILENO will tell tcgetattr that it should write the settings
// of stdin to oldt
tcgetattr( STDIN_FILENO, &oldt);
//now the settings will be copied
newt = oldt;
//two of the c_lflag will be turned off
//ECHO which is responsible for displaying the input of the user in the terminal
//ICANON is the essential one! Normally this takes care that one line at a time will be processed
//that means it will return if it sees a "\n" or an EOF or an EOL
newt.c_lflag &= ~(ICANON | ECHO );
//Those new settings will be set to STDIN
//TCSANOW tells tcsetattr to change attributes immediately.
tcsetattr( STDIN_FILENO, TCSANOW, &newt);
//now the char will be requested
getchar();
//the old settings will be written back to STDIN
tcsetattr( STDIN_FILENO, TCSANOW, &oldt);
}
int main(void)
{
pressKey();
printf("END\n");
return 0;
}
O_NONBLOCK seems also to be an important flag, but it didn't change anything for me.
I appreciate if people with some deeper knowledge would comment on this and give some advice.
I had loads of trouble with this too. I have data and labels in separate arrays, then I reinitialise the chart data. I added line.destroy(); as suggested above, which has done the trick.
var ctx = document.getElementById("canvas").getContext("2d");
if (window.myLine) {
    window.myLine.destroy();
}
window.myLine = new Chart(ctx).Line(lineChartData, {
    // etc.
    // etc.
});
I believe you have to constrain T with a where clause to only allow objects with a parameterless constructor.
Right now it accepts anything, including objects without one.
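A minimal sketch of what that looks like (Factory is a made-up name):

public class Factory<T> where T : new()
{
    public T CreateInstance()
    {
        return new T(); // only compiles because of the new() constraint
    }
}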
An AXD file is a file used by ASP.NET applications for handling embedded resource requests. It contains instructions for retrieving embedded resources, such as images, JavaScript (.JS) files, and .CSS files.
AXD files are used for injecting resources into the client-side webpage and accessing them on the server in a standard way.
Open CMD
Run this
SQLCMD -L
You will get a list of SQL Server instances.
This question is already answered here
The classpath never includes specific files. It includes directories and jar files. So, put that file in a directory that is in your classpath
.
Log4j
properties aren't (normally) used when developing apps (unless you're debugging Eclipse itself!). So what you really want is to build the executable Java app (application, WAR, EAR or whatever) and include the Log4j
properties on the runtime classpath.
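For example, if log4j.properties lives in a conf directory, something like this (paths and class name are illustrative) puts that directory, not the file, on the classpath:

java -cp conf:lib/log4j.jar:classes com.example.Main    # use ; instead of : as the separator on Windows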
And what if you want all tables, for some reason?
You can generate these commands in SSMS:
SELECT
CONCAT('sqlcmd -S ',
'Your(local?)SERVERhere'
,' -d',
'YourDB'
,' -E -s, -W -Q "SELECT * FROM ',
TABLE_NAME,
'" >',
TABLE_NAME,
'.csv') FROM INFORMATION_SCHEMA.TABLES
And you get back rows like this:
sqlcmd -S ... -d... -E -s, -W -Q "SELECT * FROM table1" >table1.csv
sqlcmd -S ... -d... -E -s, -W -Q "SELECT * FROM table2" >table2.csv
...
There is also the option to use TAB as the delimiter, which would be better, but it requires a strange Unicode character (entered with Alt+9 in CMD), and that only works when copied and pasted onto the command line, not in a batch file.
You can use TextView.setLineSpacing(n,m)
function.
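For example (values are arbitrary): the first argument adds extra spacing in pixels, the second multiplies the line height.

TextView textView = (TextView) findViewById(R.id.my_text_view); // hypothetical view id
textView.setLineSpacing(0f, 1.5f); // no extra pixels, 1.5x line height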
For our application, we had a client server architecture and we only allowed decrypting/encrypting data in the server level. Hence the JCE files are only needed there.
We had another problem where we needed to update a security JAR on the client machines through JNLP; it overwrites the libraries in ${java.home}/lib/security/ of the JVM on first run.
That made it work.
You can use the array_key_exists()
built-in function:
if (array_key_exists('id', $_GET)) {
echo $_GET['id'];
}
or the isset()
built-in function:
if (isset($_GET['id'])) {
echo $_GET['id'];
}
When you're going to work with such time series in Python, pandas
is indispensable. And here's the good news: it comes with a historical data downloader for Yahoo: pandas.io.data.DataReader
.
from pandas.io.data import DataReader
from datetime import datetime
ibm = DataReader('IBM', 'yahoo', datetime(2000, 1, 1), datetime(2012, 1, 1))
print(ibm['Adj Close'])
Here's an example from the pandas
documentation.
Update for pandas >= 0.19:
The pandas.io.data
module has been removed from pandas>=0.19
onwards. Instead, you should use the separate pandas-datareader
package. Install with:
pip install pandas-datareader
And then you can do this in Python:
import pandas_datareader as pdr
from datetime import datetime
ibm = pdr.get_data_yahoo(symbols='IBM', start=datetime(2000, 1, 1), end=datetime(2012, 1, 1))
print(ibm['Adj Close'])
A simple way, using PHP's built-in checkdate() function:
function checkmydate($date) {
$tempDate = explode('-', $date);
// checkdate(month, day, year)
return checkdate($tempDate[1], $tempDate[2], $tempDate[0]);
}
Test
checkmydate('2015-12-01'); //true
checkmydate('2015-14-04'); //false
You can't run PHP in .html files because the server does not recognize that as a valid PHP extension unless you tell it to. To do this you need to create a .htaccess file in your root web directory and add this line to it:
AddType application/x-httpd-php .htm .html
This will tell Apache to process files with a .htm or .html file extension as PHP files.
curl
sends POST requests with the default content type of application/x-www-form-urlencoded
. If you want to send a JSON request, you will have to specify the correct content type header:
$ curl -vX POST http://server/api/v1/places.json -d @testplace.json \
--header "Content-Type: application/json"
But that will only work if the server accepts json input. The .json
at the end of the url may only indicate that the output is json, it doesn't necessarily mean that it also will handle json input. The API documentation should give you a hint on whether it does or not.
The reason you get a 401
and not some other error is probably because the server can't extract the auth_token
from your request.
All of the answers to this question are wrong in one way or another.
IFS=', ' read -r -a array <<< "$string"
1: This is a misuse of $IFS
. The value of the $IFS
variable is not taken as a single variable-length string separator, rather it is taken as a set of single-character string separators, where each field that read
splits off from the input line can be terminated by any character in the set (comma or space, in this example).
Actually, for the real sticklers out there, the full meaning of $IFS
is slightly more involved. From the bash manual:
The shell treats each character of IFS as a delimiter, and splits the results of the other expansions into words using these characters as field terminators. If IFS is unset, or its value is exactly <space><tab><newline>, the default, then sequences of <space>, <tab>, and <newline> at the beginning and end of the results of the previous expansions are ignored, and any sequence of IFS characters not at the beginning or end serves to delimit words. If IFS has a value other than the default, then sequences of the whitespace characters <space>, <tab>, and <newline> are ignored at the beginning and end of the word, as long as the whitespace character is in the value of IFS (an IFS whitespace character). Any character in IFS that is not IFS whitespace, along with any adjacent IFS whitespace characters, delimits a field. A sequence of IFS whitespace characters is also treated as a delimiter. If the value of IFS is null, no word splitting occurs.
Basically, for non-default non-null values of $IFS
, fields can be separated with either (1) a sequence of one or more characters that are all from the set of "IFS whitespace characters" (that is, whichever of <space>, <tab>, and <newline> ("newline" meaning line feed (LF)) are present anywhere in $IFS
), or (2) any non-"IFS whitespace character" that's present in $IFS
along with whatever "IFS whitespace characters" surround it in the input line.
For the OP, it's possible that the second separation mode I described in the previous paragraph is exactly what he wants for his input string, but we can be pretty confident that the first separation mode I described is not correct at all. For example, what if his input string was 'Los Angeles, United States, North America'
?
IFS=', ' read -ra a <<<'Los Angeles, United States, North America'; declare -p a;
## declare -a a=([0]="Los" [1]="Angeles" [2]="United" [3]="States" [4]="North" [5]="America")
2: Even if you were to use this solution with a single-character separator (such as a comma by itself, that is, with no following space or other baggage), if the value of the $string
variable happens to contain any LFs, then read
will stop processing once it encounters the first LF. The read
builtin only processes one line per invocation. This is true even if you are piping or redirecting input only to the read
statement, as we are doing in this example with the here-string mechanism, and thus unprocessed input is guaranteed to be lost. The code that powers the read
builtin has no knowledge of the data flow within its containing command structure.
You could argue that this is unlikely to cause a problem, but still, it's a subtle hazard that should be avoided if possible. It is caused by the fact that the read
builtin actually does two levels of input splitting: first into lines, then into fields. Since the OP only wants one level of splitting, this usage of the read
builtin is not appropriate, and we should avoid it.
3: A non-obvious potential issue with this solution is that read
always drops the trailing field if it is empty, although it preserves empty fields otherwise. Here's a demo:
string=', , a, , b, c, , , '; IFS=', ' read -ra a <<<"$string"; declare -p a;
## declare -a a=([0]="" [1]="" [2]="a" [3]="" [4]="b" [5]="c" [6]="" [7]="")
Maybe the OP wouldn't care about this, but it's still a limitation worth knowing about. It reduces the robustness and generality of the solution.
This problem can be solved by appending a dummy trailing delimiter to the input string just prior to feeding it to read
, as I will demonstrate later.
string="1:2:3:4:5"
set -f # avoid globbing (expansion of *).
array=(${string//:/ })
t="one,two,three"
a=($(echo $t | tr ',' "\n"))
(Note: I added the missing parentheses around the command substitution which the answerer seems to have omitted.)
string="1,2,3,4"
array=(`echo $string | sed 's/,/\n/g'`)
These solutions leverage word splitting in an array assignment to split the string into fields. Funnily enough, just like read
, general word splitting also uses the $IFS
special variable, although in this case it is implied that it is set to its default value of <space><tab><newline>, and therefore any sequence of one or more IFS characters (which are all whitespace characters now) is considered to be a field delimiter.
This solves the problem of two levels of splitting committed by read
, since word splitting by itself constitutes only one level of splitting. But just as before, the problem here is that the individual fields in the input string can already contain $IFS
characters, and thus they would be improperly split during the word splitting operation. This happens to not be the case for any of the sample input strings provided by these answerers (how convenient...), but of course that doesn't change the fact that any code base that used this idiom would then run the risk of blowing up if this assumption were ever violated at some point down the line. Once again, consider my counterexample of 'Los Angeles, United States, North America'
(or 'Los Angeles:United States:North America'
).
Also, word splitting is normally followed by filename expansion (aka pathname expansion aka globbing), which, if done, would potentially corrupt words containing the characters *
, ?
, or [
followed by ]
(and, if extglob
is set, parenthesized fragments preceded by ?
, *
, +
, @
, or !
) by matching them against file system objects and expanding the words ("globs") accordingly. The first of these three answerers has cleverly undercut this problem by running set -f
beforehand to disable globbing. Technically this works (although you should probably add set +f
afterward to reenable globbing for subsequent code which may depend on it), but it's undesirable to have to mess with global shell settings in order to hack a basic string-to-array parsing operation in local code.
Another issue with this answer is that all empty fields will be lost. This may or may not be a problem, depending on the application.
Note: If you're going to use this solution, it's better to use the ${string//:/ }
"pattern substitution" form of parameter expansion, rather than going to the trouble of invoking a command substitution (which forks the shell), starting up a pipeline, and running an external executable (tr
or sed
), since parameter expansion is purely a shell-internal operation. (Also, for the tr
and sed
solutions, the input variable should be double-quoted inside the command substitution; otherwise word splitting would take effect in the echo
command and potentially mess with the field values. Also, the $(...)
form of command substitution is preferable to the old `...`
form since it simplifies nesting of command substitutions and allows for better syntax highlighting by text editors.)
str="a, b, c, d" # assuming there is a space after ',' as in Q
arr=(${str//,/}) # delete all occurrences of ','
This answer is almost the same as #2. The difference is that the answerer has made the assumption that the fields are delimited by two characters, one of which being represented in the default $IFS
, and the other not. He has solved this rather specific case by removing the non-IFS-represented character using a pattern substitution expansion and then using word splitting to split the fields on the surviving IFS-represented delimiter character.
This is not a very generic solution. Furthermore, it can be argued that the comma is really the "primary" delimiter character here, and that stripping it and then depending on the space character for field splitting is simply wrong. Once again, consider my counterexample: 'Los Angeles, United States, North America'
.
Also, again, filename expansion could corrupt the expanded words, but this can be prevented by temporarily disabling globbing for the assignment with set -f
and then set +f
.
Also, again, all empty fields will be lost, which may or may not be a problem depending on the application.
string='first line
second line
third line'
oldIFS="$IFS"
IFS='
'
IFS=${IFS:0:1} # this is useful to format your code with tabs
lines=( $string )
IFS="$oldIFS"
This is similar to #2 and #3 in that it uses word splitting to get the job done, only now the code explicitly sets $IFS
to contain only the single-character field delimiter present in the input string. It should be repeated that this cannot work for multicharacter field delimiters such as the OP's comma-space delimiter. But for a single-character delimiter like the LF used in this example, it actually comes close to being perfect. The fields cannot be unintentionally split in the middle as we saw with previous wrong answers, and there is only one level of splitting, as required.
One problem is that filename expansion will corrupt affected words as described earlier, although once again this can be solved by wrapping the critical statement in set -f
and set +f
.
Another potential problem is that, since LF qualifies as an "IFS whitespace character" as defined earlier, all empty fields will be lost, just as in #2 and #3. This would of course not be a problem if the delimiter happens to be a non-"IFS whitespace character", and depending on the application it may not matter anyway, but it does vitiate the generality of the solution.
So, to sum up, assuming you have a one-character delimiter, and it is either a non-"IFS whitespace character" or you don't care about empty fields, and you wrap the critical statement in set -f
and set +f
, then this solution works, but otherwise not.
(Also, for information's sake, assigning a LF to a variable in bash can be done more easily with the $'...'
syntax, e.g. IFS=$'\n';
.)
countries='Paris, France, Europe'
OIFS="$IFS"
IFS=', ' array=($countries)
IFS="$OIFS"
IFS=', ' eval 'array=($string)'
This solution is effectively a cross between #1 (in that it sets $IFS
to comma-space) and #2-4 (in that it uses word splitting to split the string into fields). Because of this, it suffers from most of the problems that afflict all of the above wrong answers, sort of like the worst of all worlds.
Also, regarding the second variant, it may seem like the eval
call is completely unnecessary, since its argument is a single-quoted string literal, and therefore is statically known. But there's actually a very non-obvious benefit to using eval
in this way. Normally, when you run a simple command which consists of a variable assignment only, meaning without an actual command word following it, the assignment takes effect in the shell environment:
IFS=', '; ## changes $IFS in the shell environment
This is true even if the simple command involves multiple variable assignments; again, as long as there's no command word, all variable assignments affect the shell environment:
IFS=', ' array=($countries); ## changes both $IFS and $array in the shell environment
But, if the variable assignment is attached to a command name (I like to call this a "prefix assignment") then it does not affect the shell environment, and instead only affects the environment of the executed command, regardless whether it is a builtin or external:
IFS=', ' :; ## : is a builtin command, the $IFS assignment does not outlive it
IFS=', ' env; ## env is an external command, the $IFS assignment does not outlive it
Relevant quote from the bash manual:
If no command name results, the variable assignments affect the current shell environment. Otherwise, the variables are added to the environment of the executed command and do not affect the current shell environment.
It is possible to exploit this feature of variable assignment to change $IFS
only temporarily, which allows us to avoid the whole save-and-restore gambit like that which is being done with the $OIFS
variable in the first variant. But the challenge we face here is that the command we need to run is itself a mere variable assignment, and hence it would not involve a command word to make the $IFS
assignment temporary. You might think to yourself, well why not just add a no-op command word to the statement like the : builtin
to make the $IFS
assignment temporary? This does not work because it would then make the $array
assignment temporary as well:
IFS=', ' array=($countries) :; ## fails; new $array value never escapes the : command
So, we're effectively at an impasse, a bit of a catch-22. But, when eval
runs its code, it runs it in the shell environment, as if it was normal, static source code, and therefore we can run the $array
assignment inside the eval
argument to have it take effect in the shell environment, while the $IFS
prefix assignment that is prefixed to the eval
command will not outlive the eval
command. This is exactly the trick that is being used in the second variant of this solution:
IFS=', ' eval 'array=($string)'; ## $IFS does not outlive the eval command, but $array does
So, as you can see, it's actually quite a clever trick, and accomplishes exactly what is required (at least with respect to assignment effectation) in a rather non-obvious way. I'm actually not against this trick in general, despite the involvement of eval
; just be careful to single-quote the argument string to guard against security threats.
But again, because of the "worst of all worlds" agglomeration of problems, this is still a wrong answer to the OP's requirement.
IFS=', '; array=(Paris, France, Europe)
IFS=' ';declare -a array=(Paris France Europe)
Um... what? The OP has a string variable that needs to be parsed into an array. This "answer" starts with the verbatim contents of the input string pasted into an array literal. I guess that's one way to do it.
It looks like the answerer may have assumed that the $IFS
variable affects all bash parsing in all contexts, which is not true. From the bash manual:
IFS The Internal Field Separator that is used for word splitting after expansion and to split lines into words with the read builtin command. The default value is <space><tab><newline>.
So the $IFS
special variable is actually only used in two contexts: (1) word splitting that is performed after expansion (meaning not when parsing bash source code) and (2) for splitting input lines into words by the read
builtin.
Let me try to make this clearer. I think it might be good to draw a distinction between parsing and execution. Bash must first parse the source code, which obviously is a parsing event, and then later it executes the code, which is when expansion comes into the picture. Expansion is really an execution event. Furthermore, I take issue with the description of the $IFS
variable that I just quoted above; rather than saying that word splitting is performed after expansion, I would say that word splitting is performed during expansion, or, perhaps even more precisely, word splitting is part of the expansion process. The phrase "word splitting" refers only to this step of expansion; it should never be used to refer to the parsing of bash source code, although unfortunately the docs do seem to throw around the words "split" and "words" a lot. Here's a relevant excerpt from the linux.die.net version of the bash manual:
Expansion is performed on the command line after it has been split into words. There are seven kinds of expansion performed: brace expansion, tilde expansion, parameter and variable expansion, command substitution, arithmetic expansion, word splitting, and pathname expansion.
The order of expansions is: brace expansion; tilde expansion, parameter and variable expansion, arithmetic expansion, and command substitution (done in a left-to-right fashion); word splitting; and pathname expansion.
You could argue the GNU version of the manual does slightly better, since it opts for the word "tokens" instead of "words" in the first sentence of the Expansion section:
Expansion is performed on the command line after it has been split into tokens.
The important point is, $IFS
does not change the way bash parses source code. Parsing of bash source code is actually a very complex process that involves recognition of the various elements of shell grammar, such as command sequences, command lists, pipelines, parameter expansions, arithmetic substitutions, and command substitutions. For the most part, the bash parsing process cannot be altered by user-level actions like variable assignments (actually, there are some minor exceptions to this rule; for example, see the various compatxx
shell settings, which can change certain aspects of parsing behavior on-the-fly). The upstream "words"/"tokens" that result from this complex parsing process are then expanded according to the general process of "expansion" as broken down in the above documentation excerpts, where word splitting of the expanded (expanding?) text into downstream words is simply one step of that process. Word splitting only touches text that has been spit out of a preceding expansion step; it does not affect literal text that was parsed right off the source bytestream.
string='first line
second line
third line'
while read -r line; do lines+=("$line"); done <<<"$string"
This is one of the best solutions. Notice that we're back to using read
. Didn't I say earlier that read
is inappropriate because it performs two levels of splitting, when we only need one? The trick here is that you can call read
in such a way that it effectively only does one level of splitting, specifically by splitting off only one field per invocation, which necessitates the cost of having to call it repeatedly in a loop. It's a bit of a sleight of hand, but it works.
But there are problems. First: When you provide at least one NAME argument to read
, it automatically ignores leading and trailing whitespace in each field that is split off from the input string. This occurs whether $IFS
is set to its default value or not, as described earlier in this post. Now, the OP may not care about this for his specific use-case, and in fact, it may be a desirable feature of the parsing behavior. But not everyone who wants to parse a string into fields will want this. There is a solution, however: A somewhat non-obvious usage of read
is to pass zero NAME arguments. In this case, read
will store the entire input line that it gets from the input stream in a variable named $REPLY
, and, as a bonus, it does not strip leading and trailing whitespace from the value. This is a very robust usage of read
which I've exploited frequently in my shell programming career. Here's a demonstration of the difference in behavior:
string=$' a b \n c d \n e f '; ## input string
a=(); while read -r line; do a+=("$line"); done <<<"$string"; declare -p a;
## declare -a a=([0]="a b" [1]="c d" [2]="e f") ## read trimmed surrounding whitespace
a=(); while read -r; do a+=("$REPLY"); done <<<"$string"; declare -p a;
## declare -a a=([0]=" a b " [1]=" c d " [2]=" e f ") ## no trimming
The second issue with this solution is that it does not actually address the case of a custom field separator, such as the OP's comma-space. As before, multicharacter separators are not supported, which is an unfortunate limitation of this solution. We could try to at least split on comma by specifying the separator to the -d
option, but look what happens:
string='Paris, France, Europe';
a=(); while read -rd,; do a+=("$REPLY"); done <<<"$string"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France")
Predictably, the unaccounted surrounding whitespace got pulled into the field values, and hence this would have to be corrected subsequently through trimming operations (this could also be done directly in the while-loop). But there's another obvious error: Europe is missing! What happened to it? The answer is that read
returns a failing return code if it hits end-of-file (in this case we can call it end-of-string) without encountering a final field terminator on the final field. This causes the while-loop to break prematurely and we lose the final field.
Technically this same error afflicted the previous examples as well; the difference there is that the field separator was taken to be LF, which is the default when you don't specify the -d
option, and the <<<
("here-string") mechanism automatically appends a LF to the string just before it feeds it as input to the command. Hence, in those cases, we sort of accidentally solved the problem of a dropped final field by unwittingly appending an additional dummy terminator to the input. Let's call this solution the "dummy-terminator" solution. We can apply the dummy-terminator solution manually for any custom delimiter by concatenating it against the input string ourselves when instantiating it in the here-string:
a=(); while read -rd,; do a+=("$REPLY"); done <<<"$string,"; declare -p a;
declare -a a=([0]="Paris" [1]=" France" [2]=" Europe")
There, problem solved. Another solution is to only break the while-loop if both (1) read
returned failure and (2) $REPLY
is empty, meaning read
was not able to read any characters prior to hitting end-of-file. Demo:
a=(); while read -rd,|| [[ -n "$REPLY" ]]; do a+=("$REPLY"); done <<<"$string"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=$' Europe\n')
This approach also reveals the secretive LF that automatically gets appended to the here-string by the <<<
redirection operator. It could of course be stripped off separately through an explicit trimming operation as described a moment ago, but obviously the manual dummy-terminator approach solves it directly, so we could just go with that. The manual dummy-terminator solution is actually quite convenient in that it solves both of these two problems (the dropped-final-field problem and the appended-LF problem) in one go.
So, overall, this is quite a powerful solution. Its only remaining weakness is a lack of support for multicharacter delimiters, which I will address later.
string='first line
second line
third line'
readarray -t lines <<<"$string"
(This is actually from the same post as #7; the answerer provided two solutions in the same post.)
The readarray
builtin, which is a synonym for mapfile
, is ideal. It's a builtin command which parses a bytestream into an array variable in one shot; no messing with loops, conditionals, substitutions, or anything else. And it doesn't surreptitiously strip any whitespace from the input string. And (if -O
is not given) it conveniently clears the target array before assigning to it. But it's still not perfect, hence my criticism of it as a "wrong answer".
First, just to get this out of the way, note that, just like the behavior of read
when doing field-parsing, readarray
drops the trailing field if it is empty. Again, this is probably not a concern for the OP, but it could be for some use-cases. I'll come back to this in a moment.
Second, as before, it does not support multicharacter delimiters. I'll give a fix for this in a moment as well.
Third, the solution as written does not parse the OP's input string, and in fact, it cannot be used as-is to parse it. I'll expand on this momentarily as well.
For the above reasons, I still consider this to be a "wrong answer" to the OP's question. Below I'll give what I consider to be the right answer.
Right answer
Here's a naïve attempt to make #8 work by just specifying the -d
option:
string='Paris, France, Europe';
readarray -td, a <<<"$string"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=$' Europe\n')
We see the result is identical to the result we got from the double-conditional approach of the looping read
solution discussed in #7. We can almost solve this with the manual dummy-terminator trick:
readarray -td, a <<<"$string,"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=" Europe" [3]=$'\n')
The problem here is that readarray
preserved the trailing field, since the <<<
redirection operator appended the LF to the input string, and therefore the trailing field was not empty (otherwise it would've been dropped). We can take care of this by explicitly unsetting the final array element after-the-fact:
readarray -td, a <<<"$string,"; unset 'a[-1]'; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=" Europe")
The only two problems that remain, which are actually related, are (1) the extraneous whitespace that needs to be trimmed, and (2) the lack of support for multicharacter delimiters.
The whitespace could of course be trimmed afterward (for example, see How to trim whitespace from a Bash variable?). But if we can hack a multicharacter delimiter, then that would solve both problems in one shot.
Unfortunately, there's no direct way to get a multicharacter delimiter to work. The best solution I've thought of is to preprocess the input string to replace the multicharacter delimiter with a single-character delimiter that will be guaranteed not to collide with the contents of the input string. The only character that has this guarantee is the NUL byte. This is because, in bash (though not in zsh, incidentally), variables cannot contain the NUL byte. This preprocessing step can be done inline in a process substitution. Here's how to do it using awk:
readarray -td '' a < <(awk '{ gsub(/, /,"\0"); print; }' <<<"$string, "); unset 'a[-1]';
declare -p a;
## declare -a a=([0]="Paris" [1]="France" [2]="Europe")
There, finally! This solution will not erroneously split fields in the middle, will not cut out prematurely, will not drop empty fields, will not corrupt itself on filename expansions, will not automatically strip leading and trailing whitespace, will not leave a stowaway LF on the end, does not require loops, and does not settle for a single-character delimiter.
Trimming solution
Lastly,
Solution Using jQuery
<script src="http://code.jquery.com/jquery-2.1.0.min.js"></script>
<style>
#form label{float:left; width:140px;}
#error_msg{color:red; font-weight:bold;}
</style>
<script>
$(document).ready(function(){
var $submitBtn = $("#form input[type='submit']");
var $passwordBox = $("#password");
var $confirmBox = $("#confirm_password");
var $errorMsg = $('<span id="error_msg">Passwords do not match.</span>');
// This is in case the user hits refresh - some browsers will maintain the disabled state of the button.
$submitBtn.removeAttr("disabled");
function checkMatchingPasswords(){
if($confirmBox.val() != "" && $passwordBox.val != ""){
if( $confirmBox.val() != $passwordBox.val() ){
$submitBtn.attr("disabled", "disabled");
$errorMsg.insertAfter($confirmBox);
}
}
}
function resetPasswordError(){
$submitBtn.removeAttr("disabled");
var $errorCont = $("#error_msg");
if($errorCont.length > 0){
$errorCont.remove();
}
}
$("#confirm_password, #password")
.on("keydown", function(e){
/* only check when the tab or enter keys are pressed
* to prevent the method from being called needlessly */
if(e.keyCode == 13 || e.keyCode == 9) {
checkMatchingPasswords();
}
})
.on("blur", function(){
// also check when the element looses focus (clicks somewhere else)
checkMatchingPasswords();
})
.on("focus", function(){
// reset the error message when they go to make a change
resetPasswordError();
})
});
</script>
And update your form accordingly:
<form id="form" name="form" method="post" action="registration.php">
<label for="username">Username : </label>
<input name="username" id="username" type="text" /></label><br/>
<label for="password">Password :</label>
<input name="password" id="password" type="password" /><br/>
<label for="confirm_password">Confirm Password:</label>
<input type="password" name="confirm_password" id="confirm_password" /><br/>
<input type="submit" name="submit" value="registration" />
</form>
This will do precisely what you asked for.
It is advisable not to use a keyup event listener for every keypress because really you only need to evaluate it when the user is done entering information. If someone types quickly on a slow machine, they may perceive lag as each keystroke will kick off the function.
Also, in your form you are using labels wrong. The label element has a "for" attribute which should correspond to the id of the form element. This is so that when visually impaired people use a screen reader to call out the form field, it knows which field the text belongs to.
To call the function on click of some html element (control).
$('#controlID').click(myFunction);
You will need to ensure you bind the event when your html element is ready on which you binding the event. You can put the code in document.ready
$(document).ready(function(){
$('#controlID').click(myFunction);
});
You can use anonymous function to bind the event to the html element.
$(document).ready(function(){
$('#controlID').click(function(){
$.messager.show({
title:'My Title',
msg:'The message content',
showType:'fade',
style:{
right:'',
bottom:''
}
});
});
});
If you want to bind click with many elements you can use class selector
$('.someclass').click(myFunction);
Edit based on comments by OP, If you want to call function under some condition
You can use if for conditional execution, for example,
if(a == 3)
myFunction();
Don't forget that the low-level basis of this behaviour is the type-casting that is built into the JS engine.
Slice just takes an object (thanks to the existing arguments.length property) and returns an array object cast after doing all operations on it.
You can test the same logic by applying a String method to an INT value:
String.prototype.bold.call(11); // returns "<b>11</b>"
And that explains the statement above.
An example using jQuery is below. Hope this helps
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head>
<title>My jQuery JSON Web Page</title>
<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
<script type="text/javascript">
JSONTest = function() {
var resultDiv = $("#resultDivContainer");
$.ajax({
url: "https://example.com/api/",
type: "POST",
data: { apiKey: "23462", method: "example", ip: "208.74.35.5" },
dataType: "json",
success: function (result) {
switch (result) {
case true:
processResponse(result);
break;
default:
resultDiv.html(result);
}
},
error: function (xhr, ajaxOptions, thrownError) {
alert(xhr.status);
alert(thrownError);
}
});
};
</script>
</head>
<body>
<h1>My jQuery JSON Web Page</h1>
<div id="resultDivContainer"></div>
<button type="button" onclick="JSONTest()">JSON</button>
</body>
</html>
Firebug debug process
For the current datetime, you can use the now() function in your PostgreSQL insert query.
You can also refer to the following link:
Insert statement in Postgres for data type timestamp without time zone NOT NULL.
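A minimal illustration (the table and columns are made up):

-- created_at is of type timestamp without time zone NOT NULL
INSERT INTO orders (customer_id, created_at)
VALUES (42, now());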
Three related packages for covering this issue with many advanced options are:
Angular Component
A component is one of the basic building blocks of an Angular app. An app can have more than one component. In a normal app, a component contains an HTML view page, a class file that controls the behaviour of the HTML page, and a CSS/SCSS file to style your HTML view. A component can be created using the @Component
decorator that is part of @angular/core
module.
import { Component } from '@angular/core';
and to create a component
@Component({selector: 'greet', template: 'Hello {{name}}!'})
class Greet {
name: string = 'World';
}
To create a component or angular app here is the tutorial
Angular Module
An angular module is set of angular basic building blocks like component, directives, services etc. An app can have more than one module.
A module can be created using @NgModule
decorator.
@NgModule({
imports: [ BrowserModule ],
declarations: [ AppComponent ],
bootstrap: [ AppComponent ]
})
export class AppModule { }
summary:
df = pd.DataFrame({'money': [100.456, 200.789], 'share': ['100,000', '200,000']})
print(df)
print(df.to_string(formatters={'money': '${:,.2f}'.format}))
for col_name in ('share',):
df[col_name] = df[col_name].map(lambda p: int(p.replace(',', '')))
print(df)
"""
money share
0 100.456 100,000
1 200.789 200,000
money share
0 $100.46 100,000
1 $200.79 200,000
money share
0 100.456 100000
1 200.789 200000
"""
This is how I do it with an array Func<>:
var tasks = new Func<Task>[]
{
() => myAsyncWork1(),
() => myAsyncWork2(),
() => myAsyncWork3()
};
await Task.WhenAll(tasks.Select(task => task()).ToArray()); //Async
Task.WaitAll(tasks.Select(task => task()).ToArray()); //Or use WaitAll for Sync
First of all, wave bye-bye to those quotes:
background-image: url(nickcage.jpg); // No quotes around the file name
Next, if your html, css and image are all in the same directory then removing the quotes should fix it. If, however, your css or image are in subdirectories of where your html lives, you'll want to make sure you correctly path to the image:
background-image: url(../images/nickcage.jpg); // css and image live in subdirectories
background-image: url(images/nickcage.jpg); // css lives with html but images is a subdirectory
Hope it helps.
The Boolean object doesn't have a 'parse' method. Boolean('false')
returns true, so that won't work. !!'false'
also returns true
, so that won't work also.
If you want string 'true'
to return boolean true
and string 'false'
to return boolean false
, then the simplest solution is to use eval()
. eval('true')
returns true and eval('false')
returns false. Keep in mind the performance implications when using eval()
though.
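If you would rather avoid eval() altogether, a plain string comparison gives the same result for these two inputs (a small sketch; the helper name is made up):

function parseBool(str) {
  return String(str).toLowerCase() === 'true';
}
parseBool('true');  // true
parseBool('false'); // false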
How about exporting the variable, but only inside the subshell?:
(export FOO=bar && somecommand someargs | somecommand2)
Keith has a point, to unconditionally execute the commands, do this:
(export FOO=bar; somecommand someargs | somecommand2)
IntelliJ IDEA 14 & 15 & 2017:
View > Tool Windows > Terminal
or
Alt + F12
Also consider using Array()
. From the Ruby Community Style Guide:
Use Array() instead of explicit Array check or [*var], when dealing with a variable you want to treat as an Array, but you're not certain it's an array.
# bad
paths = [paths] unless paths.is_a? Array
paths.each { |path| do_something(path) }
# bad (always creates a new Array instance)
[*paths].each { |path| do_something(path) }
# good (and a bit more readable)
Array(paths).each { |path| do_something(path) }
Alias default to node
itself, to avoid having to update the default alias whenever you update your Node version later on.
nvm alias default node
Try to start path\to\cygwin\bin\bash.exe
Another neat option is to use the Directive
as an element and not as an attribute.
@Directive({
selector: 'app-directive'
})
export class InformativeDirective implements AfterViewInit {
@Input()
public first: string;
@Input()
public second: string;
ngAfterViewInit(): void {
console.log(`Values: ${this.first}, ${this.second}`);
}
}
And this directive can be used like that:
<app-someKindOfComponent>
<app-directive [first]="'first 1'" [second]="'second 1'">A</app-directive>
<app-directive [first]="'First 2'" [second]="'second 2'">B</app-directive>
<app-directive [first]="'First 3'" [second]="'second 3'">C</app-directive>
</app-someKindOfComponent>`
Simple, neat and powerful.
look for param type
Other HTTP request methods, such as PUT and DELETE, can also be used here, but they are not supported by all browsers.
You can plot multiple subplots of multiple pandas data frames using matplotlib, with a simple trick of making a list of all the data frames. Then use a for loop for plotting the subplots.
Working code:
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
# dataframe sample data
df1 = pd.DataFrame(np.random.rand(10,2)*100, columns=['A', 'B'])
df2 = pd.DataFrame(np.random.rand(10,2)*100, columns=['A', 'B'])
df3 = pd.DataFrame(np.random.rand(10,2)*100, columns=['A', 'B'])
df4 = pd.DataFrame(np.random.rand(10,2)*100, columns=['A', 'B'])
df5 = pd.DataFrame(np.random.rand(10,2)*100, columns=['A', 'B'])
df6 = pd.DataFrame(np.random.rand(10,2)*100, columns=['A', 'B'])
#define number of rows and columns for subplots
nrow=3
ncol=2
# make a list of all dataframes
df_list = [df1 ,df2, df3, df4, df5, df6]
fig, axes = plt.subplots(nrow, ncol)
# plot counter
count=0
for r in range(nrow):
    for c in range(ncol):
        df_list[count].plot(ax=axes[r,c])
        count += 1
Using this code you can plot subplots in any configuration. You need to just define number of rows nrow
and number of columns ncol
. Also, you need to make list of data frames df_list
which you wanted to plot.
If you have long-running server-side code, I don't think it results in a 404 as you said ("it goes to a webpage is not found error page"); the browser should report a request timeout error.
You may do 2 things:
Increase the timeout in your CGI/server-side engine:
PHP : http://www.php.net/manual/en/info.configuration.php#ini.max-execution-time - default is 30 seconds
In php.ini:
max_execution_time 60
Increase apache timeout - default is 300 (in version 2.4 it is 60).
In your httpd.conf (in server config or vhost config)
TimeOut 600
Note that the first setting allows your PHP script to run longer; it will not interfere with the network timeout.
The second setting modifies the maximum amount of time the server will wait for certain events before failing a request.
Sorry, I'm not sure whether you are using PHP for server-side processing, but if you provide more info I can be more precise.
form
If the name attribute is specified, the form controller is published onto the current scope under this name.
Alias: ngForm
In Angular, forms can be nested. This means that the outer form is valid when all of the child forms are valid as well. However, browsers do not allow nesting of <form> elements, so Angular provides the ngForm directive, which behaves identically to <form> but can be nested. This allows you to have nested forms, which is very useful when using Angular validation directives in forms that are dynamically generated using the ngRepeat directive. Since you cannot dynamically generate the name attribute of input elements using interpolation, you have to wrap each set of repeated inputs in an ngForm directive and nest these in an outer form element.
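A rough sketch of what that nesting looks like (all names are placeholders):

<form name="outerForm">
  <div ng-repeat="item in items">
    <ng-form name="innerForm">
      <input type="text" name="value" ng-model="item.value" required>
    </ng-form>
  </div>
</form>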
CSS classes
ng-valid is set if the form is valid.
ng-invalid is set if the form is invalid.
ng-pristine is set if the form is pristine.
ng-dirty is set if the form is dirty.
ng-submitted is set if the form was submitted.
Keep in mind that ngAnimate can detect each of these classes when added and removed.
Submitting a form and preventing the default action
Since the role of forms in client-side Angular applications is different than in classical roundtrip apps, it is desirable for the browser not to translate the form submission into a full page reload that sends the data to the server. Instead some javascript logic should be triggered to handle the form submission in an application-specific way.
For this reason, Angular prevents the default action (form submission to the server) unless the element has an action attribute specified.
You can use one of the following two ways to specify what javascript method should be called when a form is submitted:
ngSubmit directive on the form element
ngClick directive on the first button or input field of type submit (input[type=submit])
To prevent double execution of the handler, use only one of the ngSubmit or ngClick directives.
This is because of the following form submission rules in the HTML specification:
If a form has only one input field then hitting enter in this field triggers form submit (ngSubmit)
if a form has 2+ input fields and no buttons or input[type=submit]
then hitting enter doesn't trigger submit
if a form has one or more input fields and one or more buttons or input[type=submit]
then hitting enter in any of the input fields will trigger the click handler on the first button or input[type=submit]
(ngClick) and a submit handler on the enclosing form (ngSubmit).
Any pending ngModelOptions changes will take place immediately when an enclosing form is submitted. Note that ngClick events will occur before the model is updated.
Use ngSubmit to have access to the updated model.
app.js:
angular.module('formExample', [])
.controller('FormController', ['$scope', function($scope) {
$scope.userType = 'guest';
}]);
Form:
<form name="myForm" ng-controller="FormController" class="my-form">
userType: <input name="input" ng-model="userType" required>
<span class="error" ng-show="myForm.input.$error.required">Required!</span>
userType = {{userType}}
myForm.input.$valid = {{myForm.input.$valid}}
myForm.input.$error = {{myForm.input.$error}}
myForm.$valid = {{myForm.$valid}}
myForm.$error.required = {{!!myForm.$error.required}}
</form>
Source: AngularJS: API: form
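To illustrate the ngSubmit rule above, here is a minimal sketch (login() is a made-up handler defined on the controller's scope):
<form name="loginForm" ng-submit="login()" novalidate>
  <input type="text" name="username" ng-model="user.name" required>
  <input type="submit" value="Log in">
</form>
Hitting enter in the input, or clicking the submit button, calls login() on the current scope instead of triggering a full page reload.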
I recommend generating an open-format XML Excel file; it is much more flexible than CSV.
Read Generating an Excel file in ASP.NET for more info
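For reference, a minimal SpreadsheetML (Excel 2003 XML) document looks roughly like this (a sketch with made-up cell values; Excel opens it directly):
<?xml version="1.0"?>
<Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet"
          xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet">
  <Worksheet ss:Name="Sheet1">
    <Table>
      <Row>
        <Cell><Data ss:Type="String">Name</Data></Cell>
        <Cell><Data ss:Type="Number">42</Data></Cell>
      </Row>
    </Table>
  </Worksheet>
</Workbook>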
For an incoming request like /v1/location/1234
, as you can imagine it would be difficult for Web API to automatically figure out if the value of the segment corresponding to '1234' is related to appid
and not to deviceid
.
I think you should change your route template to be like
[Route("v1/location/{deviceOrAppid?}", Name = "AddNewLocation")]
and then parse the deviceOrAppid to figure out the type of id.
Also, you need to make the segment in the route template optional; otherwise it is considered required. Note the ? character in this case.
For example:
[Route("v1/location/{deviceOrAppid?}", Name = "AddNewLocation")]
(Another solution, using pivot_longer & pivot_wider from the latest tidyr update.)
You should try using pivot_longer to get your data from wide to long form. Read about the latest tidyr update on pivot_longer & pivot_wider here: https://tidyr.tidyverse.org/articles/pivot.html
library(tidyverse)
C1<-c(3,2,4,4,5)
C2<-c(3,7,3,4,5)
C3<-c(5,4,3,6,3)
DF<-data.frame(ID=c("A","B","C","D","E"),C1=C1,C2=C2,C3=C3)
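The transformation step itself would look something like this (a sketch, assuming the goal is the per-ID mean across C1:C3 shown in the output below):
DF %>%
  pivot_longer(cols = C1:C3, names_to = "variable", values_to = "value") %>%
  group_by(ID) %>%
  summarise(mean = mean(value))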
Output:
ID mean
<fct> <dbl>
1 A 3.67
2 B 4.33
3 C 3.33
4 D 4.67
5 E 4.33
From the Maven Embedder documentation:
-fae, --fail-at-end
Only fail the build afterwards; allow all non-impacted builds to continue
-fn, --fail-never
NEVER fail the build, regardless of project result
So if you are testing one module, then you are safe using -fae.
Otherwise, if you have multiple modules, and if you want all of them tested (even the ones that depend on the failing tests module), you should run mvn clean install -fn
.
-fae
will continue with the module that has a failing test (will run all other tests), but all modules that depend on it will be skipped.
The argparse
documentation is reasonably good but leaves out a few useful details which might not be obvious. (@Diego Navarro already mentioned some of this but I'll try to expand on his answer slightly.) Basic usage is as follows:
parser = argparse.ArgumentParser()
parser.add_argument('-f', '--my-foo', default='foobar')
parser.add_argument('-b', '--bar-value', default=3.14)
args = parser.parse_args()
The object you get back from parse_args()
is a 'Namespace' object: An object whose member variables are named after your command-line arguments. The Namespace
object is how you access your arguments and the values associated with them:
args = parser.parse_args()
print args.my_foo
print args.bar_value
(Note that argparse
replaces '-' in your argument names with underscores when naming the variables.)
In many situations you may wish to use arguments simply as flags which take no value. You can add those in argparse like this:
parser.add_argument('--foo', action='store_true')
parser.add_argument('--no-foo', action='store_false')
The above will create a variable named 'foo' that is set to True when --foo is passed (and False otherwise), and a variable named 'no_foo' that is set to False when --no-foo is passed (and True otherwise):
if (args.foo):
    print "foo is true"

if (args.no_foo is False):
    print "nofoo is false"
Note also that you can use the "required" option when adding an argument:
parser.add_argument('-o', '--output', required=True)
That way if you omit this argument at the command line argparse
will tell you it's missing and stop execution of your script.
Finally, note that it's possible to create a dict structure of your arguments using the vars
function, if that makes life easier for you.
args = parser.parse_args()
argsdict = vars(args)
print argsdict['my_foo']
print argsdict['bar_value']
As you can see, vars
returns a dict with your argument names as keys and their values as, er, values.
There are lots of other options and things you can do, but this should cover the most essential, common usage scenarios.
The most common cause of this problem is that Matlab cannot find the file on its search path. Basically, Matlab looks for files in:
the current directory (pwd);
directories on the search path (type path at the command line to see it);
a directory named @(whatever the class of the first argument is) inside any of the directories above.
, but that is often unhelpful in this case - it tells you Matlab can't find the file, which you knew already.
So the first thing to do is make sure the file is locatable on the path.
The next thing to do is make sure that the file Matlab is finding (use which) accepts the same type as the first argument you are actually passing. I.e., w may be supposed to be a different class, and there may be a divrat function for that class, but if w is actually empty ([]), Matlab looks for double/divrat when there is only an @(yourclass)/divrat.
This is just speculation on my part, but this often bites me.
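A few commands that can help diagnose this (a sketch; divrat, w and the folder path stand in for your own function, argument and toolbox location):
which -all divrat                    % list every divrat Matlab can see, including class methods
class(w)                             % check the actual class of the first argument
path                                 % inspect the current search path
addpath('C:\path\to\your\toolbox')   % add the missing folder if that is the problem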
If you can't use the delay
method as Robert Harvey suggested, you can use setTimeout
.
Eg.
setTimeout(function() {$("#test").animate({"top":"-=80px"})} , 1500); // delays 1.5 sec
setTimeout(function() {$("#test").animate({"opacity":"0"})} , 1500 + 1000); // delays 1 sec after the previous one
It will compile to this:
React.createElement('div', this.props, 'Content Here');
As you can see above, it passes all its props to the div
.
While it's not hard to do this manually using BufferedReader
and InputStreamReader
, I'd use Guava:
List<String> lines = Files.readLines(file, Charsets.UTF_8);
You can then do whatever you like with those lines.
EDIT: Note that this will read the whole file into memory in one go. In most cases that's actually fine - and it's certainly simpler than reading it line by line, processing each line as you read it. If it's an enormous file, you may need to do it that way as per T.J. Crowder's answer.
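For the enormous-file case, a plain line-by-line sketch without Guava might look like this (file is the same java.io.File as above; uses java.io and java.nio.charset.StandardCharsets):
try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(new FileInputStream(file), StandardCharsets.UTF_8))) {
    String line;
    while ((line = reader.readLine()) != null) {
        // Process each line here without holding the whole file in memory.
    }
}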
I had a similar problem where TextMate or something had replaced the double quotes with unicode double quotes.
Changing my SELENIUM_SERVER_JAR value from the unicode double quotes back to regular double quotes solved my problem.
Here is a simple implementation of the wikipedia algorithm, using the javascript ternary operator:
isLeapYear = (year % 100 === 0) ? (year % 400 === 0) : (year % 4 === 0);
I think you will have fewer problems if you declare a property on a class that implements INotifyPropertyChanged, and then databind IsChecked, SelectedIndex (using an IValueConverter) and Fill (using an IValueConverter) to it, instead of using the Checked event to toggle SelectedIndex and Fill.
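A minimal sketch of such a property (class and property names are made up; requires using System.ComponentModel):
public class SelectionViewModel : INotifyPropertyChanged
{
    private bool _isSelected;

    public bool IsSelected
    {
        get { return _isSelected; }
        set
        {
            if (_isSelected == value) return;
            _isSelected = value;
            OnPropertyChanged(nameof(IsSelected));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        // Tell any bound controls that the property value changed.
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
    }
}
IsChecked would bind to IsSelected directly, while SelectedIndex and Fill would bind to it through their IValueConverters.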
No, unlike in a lot of other languages, XSLT variables cannot change their values after they are created. You can however, avoid extraneous code with a technique like this:
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" indent="yes" omit-xml-declaration="yes"/>
<xsl:variable name="mapping">
<item key="1" v1="A" v2="B" />
<item key="2" v1="X" v2="Y" />
</xsl:variable>
<xsl:variable name="mappingNode"
select="document('')//xsl:variable[@name = 'mapping']" />
<xsl:template match="....">
<xsl:variable name="testVariable" select="'1'" />
<xsl:variable name="values" select="$mappingNode/item[@key = $testVariable]" />
<xsl:variable name="variable1" select="$values/@v1" />
<xsl:variable name="variable2" select="$values/@v2" />
</xsl:template>
</xsl:stylesheet>
In fact, once you've got the values
variable, you may not even need separate variable1
and variable2
variables. You could just use $values/@v1
and $values/@v2
instead.
Postgres hasn't implemented an equivalent to INSERT OR REPLACE
. From the ON CONFLICT
docs (emphasis mine):
It can be either DO NOTHING, or a DO UPDATE clause specifying the exact details of the UPDATE action to be performed in case of a conflict.
Though it doesn't give you shorthand for replacement, ON CONFLICT DO UPDATE
applies more generally, since it lets you set new values based on preexisting data. For example:
INSERT INTO users (id, level)
VALUES (1, 0)
ON CONFLICT (id) DO UPDATE
SET level = users.level + 1;
You can use simple color resources, specified usually inside res/values/colors.xml
.
<color name="red">#ffff0000</color>
and use this via android:background="@color/red"
. This color can be used anywhere else too, e.g. as a text color. Reference it in XML the same way, or get it in code via getResources().getColor(R.color.red)
.
You can also use any drawable resource as a background, use android:background="@drawable/mydrawable"
for this (that means 9patch drawables, normal bitmaps, shape drawables, ..).
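For example, a simple shape drawable saved as res/drawable/mydrawable.xml (the file name is just an example) could look like this:
<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android"
       android:shape="rectangle">
    <!-- Fill with the color resource defined above and round the corners slightly. -->
    <solid android:color="@color/red" />
    <corners android:radius="8dp" />
</shape>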
You can create a new User Library:
On the "Configure Build Paths" page -> Add Library -> User Library (in the list) -> User Libraries button (right side of the page),
then create your library and include your specific JARs (Add JARs button).
I hope this can help you.
You need to pass your components as children, like this:
var App = require('./App.js');
var SampleComponent = require('./SampleComponent.js');
ReactDOM.render(
<App>
<SampleComponent name="SomeName"/>
</App>,
document.body
);
And then append them in the component's body:
var App = React.createClass({
render: function() {
return (
<div>
<h1>App main component! </h1>
{
this.props.children
}
</div>
);
}
});
You don't need to manually manipulate the HTML, React will do that for you. If you want to add some child components, you just need to change the props or the state, depending on the situation. For example:
var App = React.createClass({
  getInitialState: function() {
    return { children: [{id: 1, name: "Some Name"}] };
  },
  addChild: function() {
    // A state change will cause the component to re-render.
    this.setState({
      children: this.state.children.concat([{id: 2, name: "Another Name"}])
    });
  },
  render: function() {
    return (
      <div>
        <h1>App main component! </h1>
        <button onClick={this.addChild}>Add component</button>
        {
          this.state.children.map((item) => (
            <SampleComponent key={item.id} name={item.name}/>
          ))
        }
      </div>
    );
  }
});
It is better to remove NodeJS and its modules manually, because the installation leaves a lot of files, links and modules behind, which later create problems when you reconfigure another version of NodeJS and its modules. Run the following commands:
sudo rm -rf /usr/local/bin/npm /usr/local/share/man/man1/node* /usr/local/lib/dtrace/node.d ~/.npm ~/.node-gyp /opt/local/bin/node /opt/local/include/node /opt/local/lib/node_modules
sudo rm -rf /usr/local/lib/node*
sudo rm -rf /usr/local/include/node*
sudo rm -rf /usr/local/bin/node*
and you are done.
A step by step guide with commands is at http://amcositsupport.blogspot.in/2016/07/to-completely-uninstall-node-js-from.html
This helped me resolve my problem.
The short form (a += 1) has the option to modify a in-place, instead of creating a new object representing the sum and rebinding it back to the same name (a = a + 1). So the short form (a += 1) can be more efficient, as it doesn't necessarily need to make a copy of a, unlike a = a + 1.
Also, even when they produce the same result, note that they are different because they are separate operators: + and +=.
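You can see the difference with a mutable type such as a list, where += mutates the existing object while = ... + rebinds the name to a brand-new one (a quick sketch):
a = [1, 2]
b = a          # b refers to the same list object as a
a += [3]       # in-place: extends the existing list
print(b)       # [1, 2, 3] -- b sees the change
print(a is b)  # True

a = a + [4]    # builds a new list and rebinds a to it
print(b)       # [1, 2, 3] -- b still points at the old list
print(a is b)  # False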
My solution was to remove the Eclipse ADT plugin via menu "Help > About Eclipse SDK > Installation Details". Eclipse will restart.
Next go to Menu "Help > Install New Software", then add the ADT plugin url "https://dl-ssl.google.com/android/eclipse" (or select the existing link from the dropdown).
This will re-install the latest ADT, including the DDMS files.